CN114693682A - Spine feature identification method based on image processing - Google Patents

Spine feature identification method based on image processing Download PDF

Info

Publication number
CN114693682A
CN114693682A CN202210610910.6A CN202210610910A CN114693682A CN 114693682 A CN114693682 A CN 114693682A CN 202210610910 A CN202210610910 A CN 202210610910A CN 114693682 A CN114693682 A CN 114693682A
Authority
CN
China
Prior art keywords
frequency
spine
frequency pixel
pixel points
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210610910.6A
Other languages
Chinese (zh)
Other versions
CN114693682B (en
Inventor
马学晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Affiliated Hospital of University of Qingdao
Original Assignee
Affiliated Hospital of University of Qingdao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Affiliated Hospital of University of Qingdao filed Critical Affiliated Hospital of University of Qingdao
Priority to CN202210610910.6A priority Critical patent/CN114693682B/en
Publication of CN114693682A publication Critical patent/CN114693682A/en
Application granted granted Critical
Publication of CN114693682B publication Critical patent/CN114693682B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image processing, in particular to a spine feature identification method based on image processing.

Description

Spine feature identification method based on image processing
Technical Field
The application relates to the field of image processing, in particular to a spine feature identification method based on image processing.
Background
For the spondylopathy, the medical image is needed to be shot to diagnose the disease condition, and usually, due to the different sizes of the medical equipment, most of the shot spondylopathy image is a line film, and only the spondylopathy of a local section can be observed. The standing type full spine image can be used for shooting a complete spine image, but the larger the image is, the more complex the information is, the limited dynamic range of the photosensitive element is, the limited contrast of some key details is, and the image is not easy to observe manually.
Therefore, in order to observe a complete spine image more clearly, the invention optimizes and enhances the spine image characteristics in the spine medical image by using an image processing technology on the basis of machine vision to obtain an image with clear contrast of spine and bone textures and muscle contours, and image detail information is also kept, and the method is intelligent and accurate.
Disclosure of Invention
The invention provides a spine feature identification method based on image processing, which solves the problem of unsharpness in a whole spine image in a medical image and adopts the following technical scheme:
acquiring a full spine lateral image map, and performing semantic segmentation on the full spine lateral image map to obtain a full spine image map;
detecting the whole spine image map by using a sobel operator to obtain gradient edge pixel points of the whole spine image map;
carrying out frequency division filtering on the whole spine image map to obtain high-frequency pixel points and low-frequency pixel points of the whole spine image map;
determining spine edge high-frequency pixel points and non-spine edge high-frequency pixel points in the high-frequency pixel points by utilizing the gradient edge pixel points;
taking the corresponding gray levels of the spine edge high-frequency pixel points, the non-spine edge high-frequency pixel points and the low-frequency pixel points as the gray level of the spine edge high-frequency pixel points, the gray level of the non-spine edge high-frequency pixel points and the gray level of the low-frequency pixel points;
obtaining a gray level histogram according to the gray level of the vertebra edge high-frequency pixel points, the gray level of the non-vertebra edge high-frequency pixel points and the frequency of the gray level of the low-frequency pixel points appearing in the whole vertebra image map;
acquiring the average frequency of the gray levels of all low-frequency pixel points in the gray histogram and the frequency of the gray level of each high-frequency pixel point at the edge of the vertebra;
obtaining the weight ratio of the gray levels of all the low-frequency pixel points and the gray level of each spine edge high-frequency pixel point according to the average frequency of the gray levels of all the low-frequency pixel points and the frequency of the gray level of each spine edge high-frequency pixel point;
determining the gray levels of the low-frequency pixel points, the gray levels of the spine edge high-frequency pixel points and the weights of the gray levels of the non-spine edge high-frequency pixel points according to the gray levels of all the low-frequency pixel points and the weight ratio of the gray levels of each spine edge high-frequency pixel point;
and constructing an accumulation mapping function according to the gray levels of the low-frequency pixel points, the gray levels of the high-frequency pixel points at the spine edge and the gray levels of the high-frequency pixel points at the non-spine edge, and performing histogram equalization on the full spine image map by using the accumulation mapping function to obtain the processed full spine image map.
The method for determining the spine edge high-frequency pixel points and the non-spine edge high-frequency pixel points in the high-frequency pixel points comprises the following steps:
taking gradient edge pixel points of the whole spine image as spine edge high-frequency pixel points in the high-frequency pixel points;
and other pixels except the high-frequency pixels at the edge of the spine in the high-frequency pixels are non-high-frequency pixels at the edge of the spine.
The method for obtaining the weight ratio of the gray levels of all the low-frequency pixel points to the gray level of each high-frequency pixel point at the edge of the spine comprises the following steps:
calculating the average frequency of the gray levels of all low-frequency pixel points in the gray histogram;
obtaining the average value of the entropies of the gray levels of all the low-frequency pixel points according to the average frequency of the gray levels of all the low-frequency pixel points;
acquiring the frequency of the gray level of each high-frequency pixel point at the edge of each vertebra;
and taking the ratio of the frequency of the gray level of each spine edge high-frequency pixel point to the average value of the entropies of the gray levels of all low-frequency pixel points as the weight ratio of the gray level of each low-frequency pixel point to the gray level of each spine edge high-frequency pixel point.
The method for determining the weight of the gray level of the low-frequency pixel point, the gray level of the high-frequency pixel point at the edge of the spine and the gray level of the high-frequency pixel point at the edge of the non-spine comprises the following steps:
calculating a weight average value of the gray levels of the high-frequency pixel points at the edge of the spine according to the weight ratio of the gray levels of the low-frequency pixel points to the gray level of each high-frequency pixel point at the edge of the spine;
setting the ratio of the weight average value of the gray level of the spine edge high-frequency pixel points to the gray level weight of all non-spine edge high-frequency pixel points as 1: 2;
and calculating the gray level of the low-frequency pixel, the gray level of the spine edge high-frequency pixel and the weight of the gray level of the non-spine edge high-frequency pixel according to the gray level of the low-frequency pixel and the weight ratio of the gray level of each spine edge high-frequency pixel, the weight average value of the gray level of the spine edge high-frequency pixel and the gray level weight ratio of all non-spine edge high-frequency pixels.
The cumulative mapping function is as follows:
Figure 100002_DEST_PATH_IMAGE001
in the formula (I), the compound is shown in the specification,
Figure 704085DEST_PATH_IMAGE002
the weight of the gray level of the high-frequency pixel point at the edge of the vertebra,
Figure 100002_DEST_PATH_IMAGE003
the weight of the gray level of the non-vertebral edge high-frequency pixel point,
Figure 965302DEST_PATH_IMAGE004
is the weight of the gray level of the low-frequency pixel,
Figure 100002_DEST_PATH_IMAGE005
for the gray scale cumulative distribution function of the histogram equalized image,
Figure 561412DEST_PATH_IMAGE006
is shown as
Figure 100002_DEST_PATH_IMAGE007
The mapping function of the original image and the equalized image at each gray level,
Figure 882671DEST_PATH_IMAGE008
is shown in the original image
Figure 100002_DEST_PATH_IMAGE009
The frequency with which the individual gray levels occur,
Figure 745323DEST_PATH_IMAGE010
is the total number of pixel points and is the gray level of the original image
Figure 100002_DEST_PATH_IMAGE011
Figure 295384DEST_PATH_IMAGE012
As is the number of grey levels in the original image,
Figure 100002_DEST_PATH_IMAGE013
for the purpose of the normalized gray scale level,
Figure 84349DEST_PATH_IMAGE014
Figure 100002_DEST_PATH_IMAGE015
,
Figure 576510DEST_PATH_IMAGE016
Figure 100002_DEST_PATH_IMAGE017
respectively representing gray levels in the original image as
Figure 864141DEST_PATH_IMAGE018
Figure 100002_DEST_PATH_IMAGE019
Figure 732739DEST_PATH_IMAGE020
Figure 658101DEST_PATH_IMAGE007
The number of the pixel points.
The invention has the beneficial effects that: based on image processing, detecting the whole spine image map by using sobel operator to obtain a gradient edge image of the whole spine image map, frequency division filtering is carried out on the whole spine image map to obtain high-frequency pixel points and low-frequency pixel points in the whole spine image map, dividing the high-frequency pixel points into spine edge high-frequency pixel points and non-spine edge high-frequency pixel points according to the high-frequency pixel points and the gradient edge pixel points of the whole spine image map, according to the frequency of the gray level distribution of the spine edge high-frequency pixel points, the non-spine edge high-frequency pixel points and the low-frequency pixel points in the gray level histogram, the gray levels of the high-frequency pixel points at the spinal edge, the high-frequency pixel points at the non-spinal edge and the low-frequency pixel points are subjected to weight distribution, the mapping accumulation function is obtained through weight distribution, histogram equalization is carried out on the whole spine image to obtain the whole spine image with clear contrast and complete details, and the method improves the identification degree of the spine medical image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a spine feature identification method based on image processing according to the present invention;
FIG. 2a is a schematic view of a full spine image of a spine feature recognition method based on image processing according to the present invention;
fig. 2b is a schematic diagram of a full spine image after histogram equalization in a spine feature identification method based on image processing according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of a spine feature identification method based on image processing according to the present invention is shown in fig. 1, and includes:
the method comprises the following steps: acquiring a lateral image map of the whole spine, and performing semantic segmentation to obtain the image map of the whole spine;
the purpose of this step is, gather the whole backbone image in the medical image, extract the backbone part image among them, as the basis of the subsequent data analysis.
It should be noted that: the whole spinal column film is obtained from a database of a hospital, comprises all bones from cervical vertebra to femoral shaft, and is formed by splicing a chest film and a waist film. But not a single chest film and a single waist film, because of multiple imaging, focal distance is difficult to align and coincide, vertebral body images of the two films cannot be connected, and angle measurement is inaccurate.
The semantic segmentation method comprises the following steps:
for the processing of the whole spine image map, the influence of non-target areas is removed as much as possible, and CNN semantic segmentation is adopted:
(1) CNN is an Encoder-Decoder network, and the CNN is calculated according to the following ratio of 7: a scale of 3 divides the data set into a training set and a test set.
(2) The spine region is labeled 1 and all other regions are labeled 0.
(3) The loss function used by the network is a cross entropy loss function.
The whole spine image and the background image can be obtained through semantic segmentation.
Step two: detecting the whole spine image map by using a sobel operator to obtain gradient edge pixel points of the whole spine image map; carrying out frequency division filtering on the whole spine image map to obtain high-frequency pixel points and low-frequency pixel points of the whole spine image map;
the purpose of this step is to carry out edge detection and frequency division on the image and classify the pixel points in the image.
It should be noted that, to improve the recognition effect of the spine image, the edge and texture details of the spine need to be enhanced, but a large amount of detail information of the spine is lost by directly performing histogram equalization processing on a complete full spine medical image, for example, fig. 2a is a full spine image, and fig. 2b is an image after histogram equalization of fig. 2a, it can be seen that a large amount of detail information is lost in the image, for example, contour information of excessively high brightness at the end of the lumbar vertebra is lost, the cervical vertebra is dark, the texture of the bony spur is lost, the bone edge becomes rough and unsmooth, and the like, because many pixels distributed less in the image are easily submerged, and because when the original image is mapped to the output image in the histogram equalization process, a small number of gray levels approximate to the accumulation result are rounded and combined, the gray scale range is compressed and therefore the detail features can be preserved as long as these low distributed gray levels are preserved.
In the embodiment, the detail information is processed by using a frequency division filter, the enhancement of the spine image is realized by preventing the gray levels of the parts from being combined and then equalizing, so that the contrast is enlarged and the details of key parts are kept as much as possible. The detailed characteristics of the spine can be well reserved so as to facilitate the recognition and extraction of the machine.
The method for acquiring the gradient edge pixel points of the whole spine image map comprises the following steps:
calculating each pixel point of the target area by using sobel operator
Figure 100002_DEST_PATH_IMAGE021
Direction and
Figure 55585DEST_PATH_IMAGE022
gradient of direction
Figure DEST_PATH_IMAGE023
Counting all edge pixel point sets with gray gradient on vertebra
Figure 50085DEST_PATH_IMAGE024
The method for acquiring the high-frequency pixel points and the low-frequency pixel points in the whole spine image map comprises the following steps:
splitting an image into low frequency portions using a crossover filter
Figure DEST_PATH_IMAGE025
And high frequency
Figure 440484DEST_PATH_IMAGE026
Because a large amount of noise is generated in the process of medical image shooting, the types of the noise are various, some noises can blur images, some noises can gather to become dead points to destroy details of the images, and the noises in the images can be amplified during histogram equalization, the embodiment mainly protects the edge contour and detail information of the images, so that the problem of blurring the details by other linear filtering can be solved by adopting a median filtering algorithm.
Step three: determining spine edge high-frequency pixel points and non-spine edge high-frequency pixel points in the high-frequency pixel points by utilizing the gradient edge pixel points;
the purpose of the step is to classify the pixel points in the high-frequency information of the image, store the detail characteristics in the image,
the method for dividing the high-frequency pixel points into the spine edge high-frequency pixel points and the non-spine edge high-frequency pixel points comprises the following steps:
gradient edge pixel point set using a full spine image map
Figure DEST_PATH_IMAGE027
To eliminate high frequency part
Figure 266358DEST_PATH_IMAGE026
Of non-spinal edge parts
Figure 54316DEST_PATH_IMAGE028
Namely:
Figure 801693DEST_PATH_IMAGE030
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE031
is a high-frequency pixel point at the edge of the vertebra,
Figure 949777DEST_PATH_IMAGE028
is a high-frequency pixel point at the edge of non-vertebra,
Figure 833420DEST_PATH_IMAGE026
are high frequency pixels.
Step four: taking the gray levels corresponding to the vertebra edge high-frequency pixel points, the non-vertebra edge high-frequency pixel points and the low-frequency pixel points as the gray levels of the vertebra edge high-frequency pixel points, the non-vertebra edge high-frequency pixel points and the low-frequency pixel points respectively; obtaining a gray level histogram according to the gray level of the vertebra edge high-frequency pixel points, the gray level of the non-vertebra edge high-frequency pixel points and the frequency of the gray level of the low-frequency pixel points appearing in the whole vertebra image map; acquiring the average frequency of the gray levels of all low-frequency pixel points in the gray histogram and the frequency of the gray level of each high-frequency pixel point at the edge of the vertebra; obtaining the weight ratio of the gray levels of all the low-frequency pixel points and the gray level of each spine edge high-frequency pixel point according to the average frequency of the gray levels of all the low-frequency pixel points and the frequency of the gray level of each spine edge high-frequency pixel point; determining the gray levels of the low-frequency pixel points, the gray levels of the spine edge high-frequency pixel points and the weights of the gray levels of the non-spine edge high-frequency pixel points according to the gray levels of all the low-frequency pixel points and the weight ratio of the gray levels of each spine edge high-frequency pixel point;
the purpose of this step is to calculate the weight of the grey level according to the frequency of the grey level corresponding to the different kinds of pixel points in the histogram.
The gray levels of the spine edge high-frequency pixel points, the non-spine edge high-frequency pixel points and the low-frequency pixel points are gray levels corresponding to the gray values of the spine edge high-frequency pixel points, the gray levels corresponding to the gray values of the non-spine edge high-frequency pixel points and the gray levels corresponding to the gray values of the low-frequency pixel points, and the gray levels of the spine edge high-frequency pixel points, the non-spine edge high-frequency pixel points and the low-frequency pixel points are determined according to the gray levels of the spine edge high-frequency pixel points, the gray levels of the non-spine edge high-frequency pixel points and the gray levels of the low-frequency pixel pointsThe frequency of the level appearing in the whole spine image map is obtained to obtain a gray histogram, and the gray level of the high-frequency pixel point at the edge of the spine is given weight
Figure 550534DEST_PATH_IMAGE032
Weighting the gray level of the high-frequency pixel point at the edge of non-vertebra
Figure DEST_PATH_IMAGE033
Weighting the gray level of the low-frequency pixel
Figure 50786DEST_PATH_IMAGE034
The method for acquiring the weight ratio of the gray levels of all the low-frequency pixel points to the gray level of each high-frequency pixel point at the edge of the spine comprises the following steps:
(1) obtaining the frequency of each high-frequency vertebra gray level appearing in the gray histogram
Figure DEST_PATH_IMAGE035
And the average frequency of the gray levels of all low-frequency pixel points appearing in the gray histogram
Figure 550031DEST_PATH_IMAGE036
(3) Obtaining the average value of the entropies of the gray levels of all the low-frequency pixel points according to the average frequency of the gray levels of all the low-frequency pixel points appearing in the gray histogram
Figure DEST_PATH_IMAGE037
Figure DEST_PATH_IMAGE039
(3) Entropy average value according to gray level of low-frequency pixel point
Figure 819339DEST_PATH_IMAGE037
The ratio of the gray level of each high-frequency pixel point at the edge of the vertebra to the gray level of each high-frequency pixel point at the edge of the vertebra is obtained
Figure 198367DEST_PATH_IMAGE040
And low frequency pixel gray scale
Figure DEST_PATH_IMAGE041
The weight ratio of (A):
Figure 435183DEST_PATH_IMAGE042
=
Figure DEST_PATH_IMAGE043
in the formula, the first step is that,
Figure 721807DEST_PATH_IMAGE044
for each vertebra edge high frequency pixel gray level
Figure 596354DEST_PATH_IMAGE040
And low frequency pixel gray scale
Figure 411863DEST_PATH_IMAGE041
The weight ratio of (a) to (b),
Figure DEST_PATH_IMAGE045
entropy average of gray levels of low-frequency pixel points
Figure 886707DEST_PATH_IMAGE037
The ratio of the gray level of the high-frequency pixel point at the edge of each vertebra,
Figure 429552DEST_PATH_IMAGE046
is the frequency with which the grey levels occur,
Figure 142293DEST_PATH_IMAGE035
the frequency of the gray level of the high-frequency pixel points at the edge of the spine,
Figure 331966DEST_PATH_IMAGE036
the average frequency of the gray levels of the low-frequency pixels is equal to the frequency of the gray levels of the high-frequency pixels at the edge of the spineLow frequency grey scale average frequencies, they can be retained as much as the low frequency information.
It should be noted that, first, for the high-frequency edge portion with the highest importance degree, the histogram equalization is the most ideal state when the frequency of occurrence of each gray level is equal. However, the total number of the pixels is fixed, and many low-distributed gray levels will inevitably disappear due to combination during histogram equalization, so that the gray levels combined in the low-frequency information do not need to be considered, and only the high-frequency information is subjected to
Figure 762948DEST_PATH_IMAGE031
And
Figure 860217DEST_PATH_IMAGE028
the remaining processing is performed, only the frequency is adjusted, which inevitably causes the uncertainty of gray level combination, if the average frequency of the gray levels of the low-frequency pixels is too large, in order to make the average frequency larger
Figure 178197DEST_PATH_IMAGE035
=
Figure 335508DEST_PATH_IMAGE036
More low-profile gray levels are merged, which ignores the richness of the gray levels. The invention introduces entropy to optimize the defect, the entropy describes the disorder degree of the system, the larger the entropy is, the larger the uncertainty of the system is, so that the larger the image information amount is, and after each easily swallowed high-frequency low-distribution gray level is endowed with a weight value, the entropy is equal to the average value of the gray level entropy of the low-frequency pixel points, and the high-frequency gray levels can be reserved.
The method for obtaining the weights of the gray level of the low-frequency pixel point, the gray level of the spine edge high-frequency pixel point and the gray level of the non-spine edge high-frequency pixel point comprises the following steps:
(1) this embodiment is directed to non-spinal edge high frequency pixel
Figure 253786DEST_PATH_IMAGE028
Corresponding non-spine edge high frequency pixel gray scaleStage
Figure DEST_PATH_IMAGE047
Endowing a uniform weight value
Figure 935172DEST_PATH_IMAGE048
Calculating the weight mean value of the gray levels of all the vertebra edge high-frequency pixel points, and weighting the gray levels of the non-vertebra edge high-frequency pixel points
Figure 622505DEST_PATH_IMAGE047
And the weight mean value of the gray levels of all the high-frequency pixel points at the edge of the spine
Figure DEST_PATH_IMAGE049
Is set to
Figure 216298DEST_PATH_IMAGE050
Because for
Figure 372603DEST_PATH_IMAGE028
It contains not only high-frequency information on the spine but also high-frequency information on muscle texture, organs, and the like. And therefore less important than the spine edge, a portion of the gray levels may be "sacrificed" to ensure preservation of the spine edge gray levels during histogram equalization.
(2) According to the weight ratio of the gray levels of all low-frequency pixel points and the gray level of each high-frequency pixel point at the edge of the vertebra, namely
Figure 77254DEST_PATH_IMAGE034
And
Figure 619094DEST_PATH_IMAGE032
the ratio, the proportional relation between the gray level weight of the non-spinal edge high-frequency pixel and the weight mean value of the gray levels of all spinal edge high-frequency pixels
Figure 118209DEST_PATH_IMAGE047
And
Figure 626DEST_PATH_IMAGE049
the ratio of the gray scale to the gray scale of the high-frequency pixels at the edge of the spine to the gray scale of the low-frequency pixels and the gray scale of the high-frequency pixels at the edge of the non-spine
Figure 977809DEST_PATH_IMAGE032
Figure 639735DEST_PATH_IMAGE047
Figure 309750DEST_PATH_IMAGE034
And carrying out weight distribution according to the proportional relation.
For example, the calculation method of the gray level weight of the high-frequency pixel point at the edge of the spine, the gray level weight of the low-frequency pixel point and the gray level weight of the high-frequency pixel point at the edge of the non-spine is illustrated as follows:
(1) suppose the gray level of the high-frequency pixel point at the edge of the vertebra is
Figure DEST_PATH_IMAGE051
Figure 909490DEST_PATH_IMAGE052
The gray level of the non-vertebral edge high-frequency pixel point is
Figure DEST_PATH_IMAGE053
Figure 221523DEST_PATH_IMAGE054
Low frequency pixel point gray scale of
Figure 737955DEST_PATH_IMAGE034
;
(2) According to the method for obtaining the weight ratio of the gray levels of all the low-frequency pixel points to the gray level of each spine edge high-frequency pixel point, the weight ratio of the gray levels of all the low-frequency pixel points to the gray level of each spine edge high-frequency pixel point is obtained as
Figure DEST_PATH_IMAGE055
,
Figure 156036DEST_PATH_IMAGE056
Then, then
Figure 757918DEST_PATH_IMAGE051
=
Figure DEST_PATH_IMAGE057
,
Figure 624374DEST_PATH_IMAGE052
=
Figure 995313DEST_PATH_IMAGE058
;
(3) Obtaining the proportion of the gray level weight of the non-vertebra edge high-frequency pixel points to the weight average value of the gray levels of all vertebra edge high-frequency pixel points according to the step:
Figure DEST_PATH_IMAGE059
Figure 787557DEST_PATH_IMAGE060
wherein
Figure DEST_PATH_IMAGE061
=
Figure 142315DEST_PATH_IMAGE062
Then, then
Figure DEST_PATH_IMAGE063
Figure 812462DEST_PATH_IMAGE064
(4) Then obtain
Figure DEST_PATH_IMAGE065
Obtaining the gray level of the high-frequency pixel point at the edge of the vertebra according to the proportional relation
Figure 569065DEST_PATH_IMAGE051
Figure 955047DEST_PATH_IMAGE052
High frequency gray scale of non-vertebral edge pixel
Figure 531522DEST_PATH_IMAGE053
Figure 238316DEST_PATH_IMAGE054
Gray level of low frequency pixel
Figure 583847DEST_PATH_IMAGE034
The weight of (c):
Figure 203047DEST_PATH_IMAGE051
the weight is
Figure 751971DEST_PATH_IMAGE066
Figure 216450DEST_PATH_IMAGE052
The weight is
Figure DEST_PATH_IMAGE067
Figure 947646DEST_PATH_IMAGE053
The weight is
Figure 4593DEST_PATH_IMAGE068
Figure 24502DEST_PATH_IMAGE054
The weight is as follows
Figure 558251DEST_PATH_IMAGE068
Figure 612795DEST_PATH_IMAGE034
The weight is
Figure DEST_PATH_IMAGE069
Step five: and constructing an accumulation mapping function according to the gray level of the low-frequency pixel point, the gray level of the spine edge high-frequency pixel point and the weight of the gray level of the non-spine edge high-frequency pixel point, and performing histogram equalization on the whole spine image map by using the accumulation mapping function to obtain the processed whole spine image map.
The purpose of the step is to modify the mapping accumulation function of histogram equalization according to the gray level weight of the non-vertebra edge high-frequency pixel point, the gray level weight of the vertebra edge high-frequency pixel point and the gray level weight of the low-frequency pixel point, perform histogram equalization and enhance the image.
The cumulative mapping function after adding weight distribution during histogram equalization is as follows:
Figure 793372DEST_PATH_IMAGE070
in the formula, the first step is that,
Figure 97314DEST_PATH_IMAGE005
for the gray scale cumulative distribution function of the histogram equalized image,
Figure 169175DEST_PATH_IMAGE006
is shown as
Figure 327493DEST_PATH_IMAGE007
The mapping function of the original image and the equalized image at each gray level,
Figure 459397DEST_PATH_IMAGE008
is shown in the original image
Figure 453898DEST_PATH_IMAGE009
The frequency with which the individual gray levels occur,
Figure 329450DEST_PATH_IMAGE010
the total number of the pixel points is,
Figure 109318DEST_PATH_IMAGE007
for grey levels of the original image
Figure 412124DEST_PATH_IMAGE011
Figure 159500DEST_PATH_IMAGE012
As is the number of grey levels in the original image,
Figure 573164DEST_PATH_IMAGE013
for the purpose of the normalized gray scale level,
Figure 706073DEST_PATH_IMAGE014
Figure 914201DEST_PATH_IMAGE015
,
Figure 883294DEST_PATH_IMAGE016
Figure 100649DEST_PATH_IMAGE017
respectively representing the gray levels of the original image as
Figure 589530DEST_PATH_IMAGE018
Figure 437400DEST_PATH_IMAGE019
Figure 893789DEST_PATH_IMAGE020
Figure 649256DEST_PATH_IMAGE007
The number of the pixel points.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (5)

1. A spine feature identification method based on image processing is characterized by comprising the following steps:
acquiring a full spine lateral image map, and performing semantic segmentation on the full spine lateral image map to obtain a full spine image map;
detecting the whole spine image map by using a sobel operator to obtain gradient edge pixel points of the whole spine image map;
carrying out frequency division filtering on the whole spine image map to obtain high-frequency pixel points and low-frequency pixel points of the whole spine image map;
determining spine edge high-frequency pixel points and non-spine edge high-frequency pixel points in the high-frequency pixel points by utilizing the gradient edge pixel points;
taking the gray levels corresponding to the vertebra edge high-frequency pixel points, the non-vertebra edge high-frequency pixel points and the low-frequency pixel points as the gray levels of the vertebra edge high-frequency pixel points, the non-vertebra edge high-frequency pixel points and the low-frequency pixel points respectively;
obtaining a gray level histogram according to the gray level of the vertebra edge high-frequency pixel points, the gray level of the non-vertebra edge high-frequency pixel points and the frequency of the gray level of the low-frequency pixel points appearing in the whole vertebra image map;
acquiring the average frequency of the gray levels of all low-frequency pixel points in the gray histogram and the frequency of the gray level of each high-frequency pixel point at the edge of the vertebra;
obtaining the weight ratio of the gray levels of all the low-frequency pixel points and the gray level of each spine edge high-frequency pixel point according to the average frequency of the gray levels of all the low-frequency pixel points and the frequency of the gray level of each spine edge high-frequency pixel point;
determining the gray levels of the low-frequency pixel points, the gray levels of the spine edge high-frequency pixel points and the weights of the gray levels of the non-spine edge high-frequency pixel points according to the gray levels of all the low-frequency pixel points and the weight ratio of the gray levels of each spine edge high-frequency pixel point;
and constructing an accumulation mapping function according to the gray levels of the low-frequency pixel points, the gray levels of the high-frequency pixel points at the spine edge and the gray levels of the high-frequency pixel points at the non-spine edge, and performing histogram equalization on the full spine image map by using the accumulation mapping function to obtain the processed full spine image map.
2. The spine feature identification method based on image processing according to claim 1, wherein the method for determining spine edge high frequency pixel points and non-spine edge high frequency pixel points among the high frequency pixel points comprises:
taking gradient edge pixel points of the full spine image map as spine edge high-frequency pixel points in the high-frequency pixel points;
and other pixels except the high-frequency pixels at the edge of the spine in the high-frequency pixels are non-high-frequency pixels at the edge of the spine.
3. The spine feature identification method based on image processing according to claim 1, wherein the method for obtaining the weight ratio of the gray levels of all the low-frequency pixel points to the gray level of each spine edge high-frequency pixel point comprises:
calculating the average frequency of the gray levels of all low-frequency pixel points in the gray histogram;
obtaining the average value of the entropies of the gray levels of all the low-frequency pixel points according to the average frequency of the gray levels of all the low-frequency pixel points;
acquiring the frequency of the gray level of each high-frequency pixel point at the edge of each vertebra;
and taking the ratio of the frequency of the gray level of each spine edge high-frequency pixel point to the average value of the entropies of the gray levels of all low-frequency pixel points as the weight ratio of the gray level of each low-frequency pixel point to the gray level of each spine edge high-frequency pixel point.
4. The spine feature identification method based on image processing according to claim 3, wherein the method for determining the weight of the gray level of the low-frequency pixel, the gray level of the spine edge high-frequency pixel and the gray level of the non-spine edge high-frequency pixel comprises the following steps:
calculating a weight average value of the gray levels of the high-frequency pixel points at the edge of the spine according to the weight ratio of the gray levels of the low-frequency pixel points to the gray level of each high-frequency pixel point at the edge of the spine;
gray scale of high-frequency pixel point at spine edgeThe proportion of the weight mean value of the levels to the gray level weight of all the non-vertebral edge high-frequency pixel points is set as
Figure DEST_PATH_IMAGE001
Obtaining the proportional relation of the gray level of the low-frequency pixel point, the gray level of the spine edge high-frequency pixel point and the gray level of the non-spine edge high-frequency pixel point according to the weight ratio of the gray level of the low-frequency pixel point to the gray level of each spine edge high-frequency pixel point, the weight average value of the gray level of the spine edge high-frequency pixel point and the weight proportion of the gray levels of all non-spine edge high-frequency pixel points;
and carrying out weight distribution according to the proportional relation of the gray level of the low-frequency pixel point, the gray level of the high-frequency pixel point at the edge of the spine and the gray level of the high-frequency pixel point at the edge of the non-spine.
5. A spine feature identification method based on image processing according to claim 4, characterized in that the cumulative mapping function is as follows:
Figure 509486DEST_PATH_IMAGE002
in the formula (I), the compound is shown in the specification,
Figure DEST_PATH_IMAGE003
the weight of the gray level of the high-frequency pixel point at the edge of the vertebra,
Figure 375942DEST_PATH_IMAGE004
the weight of the gray level of the non-vertebral edge high-frequency pixel point,
Figure DEST_PATH_IMAGE005
is the weight of the gray level of the low-frequency pixel,
Figure 481301DEST_PATH_IMAGE006
for the gray scale cumulative distribution function of the histogram equalized image,
Figure DEST_PATH_IMAGE007
is shown as
Figure 24278DEST_PATH_IMAGE008
The mapping function of the original image and the equalized image at each gray level,
Figure DEST_PATH_IMAGE009
is shown in the original image
Figure 899742DEST_PATH_IMAGE010
The frequency with which the individual gray levels occur,
Figure DEST_PATH_IMAGE011
the total number of the pixel points is,
Figure 819157DEST_PATH_IMAGE008
for grey levels of the original image
Figure 247864DEST_PATH_IMAGE012
Figure DEST_PATH_IMAGE013
As is the number of grey levels in the original image,
Figure 712474DEST_PATH_IMAGE014
for the purpose of the normalized gray scale level,
Figure DEST_PATH_IMAGE015
Figure 820108DEST_PATH_IMAGE016
,
Figure DEST_PATH_IMAGE017
Figure 166382DEST_PATH_IMAGE018
respectively representing the gray levels of the original image as
Figure DEST_PATH_IMAGE019
Figure 777492DEST_PATH_IMAGE020
Figure DEST_PATH_IMAGE021
Figure 171259DEST_PATH_IMAGE008
The number of the pixel points.
CN202210610910.6A 2022-06-01 2022-06-01 Spine feature identification method based on image processing Active CN114693682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210610910.6A CN114693682B (en) 2022-06-01 2022-06-01 Spine feature identification method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210610910.6A CN114693682B (en) 2022-06-01 2022-06-01 Spine feature identification method based on image processing

Publications (2)

Publication Number Publication Date
CN114693682A true CN114693682A (en) 2022-07-01
CN114693682B CN114693682B (en) 2022-08-26

Family

ID=82131266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210610910.6A Active CN114693682B (en) 2022-06-01 2022-06-01 Spine feature identification method based on image processing

Country Status (1)

Country Link
CN (1) CN114693682B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830459A (en) * 2023-02-14 2023-03-21 山东省国土空间生态修复中心(山东省地质灾害防治技术指导中心、山东省土地储备中心) Method for detecting damage degree of mountain forest and grass life community based on neural network
CN117745722A (en) * 2024-02-20 2024-03-22 北京大学 Medical health physical examination big data optimization enhancement method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10336657A (en) * 1997-05-29 1998-12-18 Ricoh Co Ltd Image processor
JPH11136521A (en) * 1997-10-31 1999-05-21 Ricoh Co Ltd Picture data processor
CN106651818A (en) * 2016-11-07 2017-05-10 湖南源信光电科技有限公司 Improved Histogram equalization low-illumination image enhancement algorithm
CN108230260A (en) * 2017-12-06 2018-06-29 天津津航计算技术研究所 A kind of fusion method of new infrared image and twilight image
CN109919929A (en) * 2019-03-06 2019-06-21 电子科技大学 A kind of fissuring of tongue feature extracting method based on wavelet transformation
CN110852977A (en) * 2019-10-29 2020-02-28 天津大学 Image enhancement method for fusing edge gray level histogram and human eye visual perception characteristics
WO2020103601A1 (en) * 2018-11-21 2020-05-28 Zhejiang Dahua Technology Co., Ltd. Method and system for generating a fusion image
CN111899205A (en) * 2020-08-10 2020-11-06 国科天成(北京)科技有限公司 Image enhancement method of scene self-adaptive wide dynamic infrared thermal imaging
CN114494256A (en) * 2022-04-14 2022-05-13 武汉金龙电线电缆有限公司 Electric wire production defect detection method based on image processing

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10336657A (en) * 1997-05-29 1998-12-18 Ricoh Co Ltd Image processor
JPH11136521A (en) * 1997-10-31 1999-05-21 Ricoh Co Ltd Picture data processor
CN106651818A (en) * 2016-11-07 2017-05-10 湖南源信光电科技有限公司 Improved Histogram equalization low-illumination image enhancement algorithm
CN108230260A (en) * 2017-12-06 2018-06-29 天津津航计算技术研究所 A kind of fusion method of new infrared image and twilight image
WO2020103601A1 (en) * 2018-11-21 2020-05-28 Zhejiang Dahua Technology Co., Ltd. Method and system for generating a fusion image
CN109919929A (en) * 2019-03-06 2019-06-21 电子科技大学 A kind of fissuring of tongue feature extracting method based on wavelet transformation
CN110852977A (en) * 2019-10-29 2020-02-28 天津大学 Image enhancement method for fusing edge gray level histogram and human eye visual perception characteristics
CN111899205A (en) * 2020-08-10 2020-11-06 国科天成(北京)科技有限公司 Image enhancement method of scene self-adaptive wide dynamic infrared thermal imaging
CN114494256A (en) * 2022-04-14 2022-05-13 武汉金龙电线电缆有限公司 Electric wire production defect detection method based on image processing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIXIA CAO 等,: "Research on feature extraction algorithm of pavement disease", 《2021 INTERNATIONAL CONFERENCE ON ELECTRONIC INFORMATION ENGINEERING AND COMPUTER SCIENCE (EIECS)》 *
张倩,: "PET/CT肺部成像过程中的衰减校正和分割方法的研究", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 *
谷志鹏 等,: "耦合边缘检测与优化的多尺度遥感图像融合法", 《计算机工程与应用》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830459A (en) * 2023-02-14 2023-03-21 山东省国土空间生态修复中心(山东省地质灾害防治技术指导中心、山东省土地储备中心) Method for detecting damage degree of mountain forest and grass life community based on neural network
CN117745722A (en) * 2024-02-20 2024-03-22 北京大学 Medical health physical examination big data optimization enhancement method
CN117745722B (en) * 2024-02-20 2024-04-30 北京大学 Medical health physical examination big data optimization enhancement method

Also Published As

Publication number Publication date
CN114693682B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN114648530B (en) CT image processing method
CN114693682B (en) Spine feature identification method based on image processing
US7218763B2 (en) Method for automated window-level settings for magnetic resonance images
WO2021017297A1 (en) Artificial intelligence-based spine image processing method and related device
CA2188394C (en) Automated method and system for computerized detection of masses and parenchymal distortions in medical images
CN103026379B (en) The method calculating image noise level
CN103249358B (en) Medical image-processing apparatus
CN109410177A (en) A kind of image quality analysis method and system of super-resolution image
CN110458859B (en) Multi-sequence MRI-based multiple myeloma focus segmentation system
CN110910317B (en) Tongue image enhancement method
CN117237591A (en) Intelligent removal method for heart ultrasonic image artifacts
CN111951215A (en) Image detection method and device and computer readable storage medium
EP1577835A2 (en) X-ray image processing apparatus and method
CN114972067A (en) X-ray small dental film image enhancement method
Kesuma et al. Improved Chest X-Ray Image Quality Using Median and Gaussian Filter Methods
CN112767403A (en) Medical image segmentation model training method, medical image segmentation method and device
CN116029934A (en) Low-dose DR image and CT image denoising method
Yang et al. Fusion of CT and MR images using an improved wavelet based method
CN115205241A (en) Metering method and system for apparent cell density
CN114332255A (en) Medical image processing method and device
CN114418920B (en) Endoscope multi-focus image fusion method
CN111091514A (en) Oral cavity CBCT image denoising method and system
CN116883270B (en) Soft mirror clear imaging system for lithotripsy operation
CN117876402B (en) Intelligent segmentation method for temporomandibular joint disorder image
CN117237342B (en) Intelligent analysis method for respiratory rehabilitation CT image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant