CN113689424A - Ultrasonic inspection system capable of automatically identifying image characteristics and identification method


Info

Publication number
CN113689424A
CN113689424A (application number CN202111057492.4A)
Authority
CN
China
Prior art keywords
image
ultrasonic
processing unit
matrix
gray level
Prior art date
Legal status
Granted
Application number
CN202111057492.4A
Other languages
Chinese (zh)
Other versions
CN113689424B (en)
Inventor
钟华
粘永健
熊希
宫庆防
曹濒月
Current Assignee
Third Military Medical University TMMU
Original Assignee
Third Military Medical University TMMU
Priority date
Filing date
Publication date
Application filed by Third Military Medical University TMMU filed Critical Third Military Medical University TMMU
Priority to CN202111057492.4A priority Critical patent/CN113689424B/en
Publication of CN113689424A publication Critical patent/CN113689424A/en
Application granted granted Critical
Publication of CN113689424B publication Critical patent/CN113689424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06F 18/23 Clustering techniques
    • G06T 5/73 Deblurring; Sharpening
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/30096 Tumor; Lesion
    • G06T 2207/30204 Marker
    • G06T 2207/30208 Marker matrix


Abstract

The invention provides an ultrasonic inspection system capable of automatically identifying image characteristics and an identification method; the ultrasonic inspection system comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are connected in sequence; the ultrasonic probe is used for realizing conversion between an electric signal and an ultrasonic signal; the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit; the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, restores image information carried by ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit; the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm, and texture feature identification by using a gray difference matrix algorithm; the display output unit is used for displaying the ultrasonic imaging image.

Description

Ultrasonic inspection system capable of automatically identifying image characteristics and identification method
Technical Field
The invention relates to the technical field of medical image processing, in particular to an ultrasonic inspection system capable of automatically identifying image characteristics and an identification method.
Background
The ultrasonic diagnostic apparatus is a mature noninvasive detection means and can be used to detect space-occupying lesions of some abdominal organs. However, with existing ultrasonic diagnostic apparatus, if a space-occupying lesion appears during the examination, the limited resolution and definition of the ultrasonic imaging image prevent the doctor from making a preliminary prediction, in a short time, of whether the lesion is benign or malignant, and from properly guiding the patient's next diagnostic step. If the tumor is at an early stage, the patient may even remain unaware of his or her own condition, because the conclusion of the ultrasonic examination is unclear, the nature of the tumor is undetermined, and no urgency is felt; in the end, the golden time for diagnosis and treatment is missed.
The prior art discloses a method for extracting the liver region in an ultrasonic imaging image, but that scheme only uses the FCM algorithm for image segmentation; it can extract a relatively complete image, but cannot further process the image to assist a doctor in diagnosis. The prior art also discloses a fully automatic ultrasonic contrast image segmentation method, which raises the brightness of some tissues and organs in the image by adjusting the gray value, but it requires a contrast agent and its operation is more complex. Moreover, the prior art does not solve the technical problem of optimizing the ultrasonic imaging image in real time during the ultrasonic examination so as to help the doctor make a more accurate diagnosis on the spot.
Therefore, there is a need for an ultrasound examination system that can perform real-time optimization processing on ultrasound imaging images, and increase the image recognition to assist the doctor in performing on-site diagnosis during the examination process.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an ultrasonic examination system and an identification method capable of automatically identifying image characteristics, so as to solve the technical problems that in the prior art, an ultrasonic imaging image cannot be optimized in real time and a doctor cannot be assisted to carry out more accurate diagnosis on site in the ultrasonic examination process.
The invention provides an ultrasonic inspection system capable of automatically identifying image characteristics, which comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are connected in sequence;
the ultrasonic probe is used for realizing conversion between an electric signal and an ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, restores image information carried by ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm, and texture feature identification by using a gray difference matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and displaying the texture features identified by the image processing unit with feature marks.
Further, the feature-mark display is shown in an enlarged manner.
Further, the display output unit automatically pops up a display window of the magnified texture feature.
Further, the feature labels are displayed in different colors for different texture features.
Further, after the image processing unit uses the fuzzy C-means algorithm, the CANNY algorithm is used for secondary edge detection and secondary image segmentation.
further, an identification method capable of automatically identifying image features comprises the following steps:
s1, calculating to obtain a clustering fuzzy membership matrix by using a fuzzy C-means algorithm on an ultrasonic imaging image;
s2, performing binarization processing on the fuzzy membership matrix to obtain a segmented target area image, wherein the target area image is
S3, storing the sum I of absolute differences of the gray levels of the target area image in a matrix I;
s4, calculating the average gray value A of the adjacent pixels in the neighborhood according to the difference delta between the gray value of the image of the target area and the average gray value of the adjacent pixels in the neighborhoodi
S5, averaging the average gray value A of the adjacent pixelsiCombining with the matrix I, and quantizing to obtain a gray level difference matrix; the matrix parameters of the gray level difference matrix comprise the gray level of each pixel in the image, the gray level probability and the sum of absolute differences of the gray levels;
and S6, calculating the roughness and the contrast of the image according to the matrix parameters of the gray level difference matrix, and finishing the texture feature identification.
Further, in S1, a fuzzy C-means algorithm is used, and its objective function is:

J(U, C) = \sum_{i=1}^{K} \sum_{s=1}^{N} u_{is}^{m} \left\| x_s - c_i \right\|^2 \quad (1)

In the above equation, J(U, C) is the weighted sum of squared distances from the individual data points to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_{is} is the membership matrix, m is the weighting exponent, c_i is the center of each class, and x_s is an individual data point.
Further, after the fuzzy C-means algorithm is used in S1, a CANNY algorithm is also used for secondary edge detection and secondary image segmentation.
Further, in S4, the average gray value A_i of the adjacent pixels in the neighborhood is calculated; the calculation formula is as follows:

A_i = \frac{1}{K} \sum_{(k_x, k_y, k_z) \neq (0, 0, 0)} x_{gl}(j_x + k_x,\ j_y + k_y,\ j_z + k_z) \quad (5)

In the above formula, A_i is the average gray value, X_{gl} is the set of pixels of the area image, x_{gl}(j_x, j_y, j_z) ∈ X_{gl} is the gray level of a pixel, K is the number of neighboring pixels, (j_x + k_x), (j_y + k_y) and (j_z + k_z) index the neighboring pixels in the x, y and z directions respectively, and δ is the difference between the gray value of the target area image and the average gray value of the adjacent pixels in the neighborhood.
Further, in S4, the gray level of each pixel in the image, the gray level probability, and the sum of absolute differences of the gray levels are calculated. The specific method is as follows: the image pixels are represented by a matrix, and the quantization parameters of the gray level difference matrix are obtained by combining that matrix with equation (5): the gray level n_i of a pixel, the gray level probability p_i, and the sum of absolute differences s_i of the gray levels.
According to the technical scheme, the invention has the beneficial effects that:
1. the invention can process the ultrasonic imaging image in real time in the ultrasonic examination process, and display the textural features identified by the image processing unit in a multi-color and amplification mode, so that doctors can observe the focus more clearly and more intuitively, and the invention assists doctors to diagnose more accurately in the examination process.
2. Carrying out edge detection and image segmentation on the ultrasonic imaging image by using a fuzzy C-means algorithm, and reducing the workload of subsequent texture feature identification; the method has the advantages that the gray difference matrix algorithm is used for identifying the texture features, the image identification and processing speed is high, the processing result can be obtained quickly and accurately, and the method is suitable for the real-time requirement of an ultrasonic inspection site.
3. After the fuzzy C-means algorithm is used, the CANNY algorithm performs secondary edge detection and secondary image segmentation, which effectively eliminates the fuzzy areas left at the image edges after the first segmentation by the fuzzy C-means algorithm, making the image processing result more accurate.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a system architecture according to the present invention.
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Example 1
As shown in fig. 1, the present invention provides an ultrasonic inspection system for automatically identifying image texture features, which comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit, which are connected in sequence;
the ultrasonic probe is used for realizing conversion between an electric signal and an ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, restores image information carried by ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm, and texture feature identification by using a gray difference matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and displaying the texture features identified by the image processing unit in multiple colors.
The following describes the operation of the ultrasonic inspection system for automatically identifying the texture features of an image in detail:
the ultrasonic probe is also called as an ultrasonic transducer, can realize the conversion between electric signals/ultrasonic signals, generates ultrasonic waves when receiving electric pulse drive, and transmits the ultrasonic waves to a part to be diagnosed; the ultrasonic echo signals reflected by various organs of the human body are converted into electric signals by the ultrasonic probe.
The transmitting and receiving unit transmits and receives ultrasonic signals through the ultrasonic probe, and performs electronic focusing and multipoint focusing on the transmitted and received ultrasonic signals. The ultrasonic echo signals received from the ultrasonic probe are filtered, amplified and various kinds of preprocessing are performed in the transmitting-receiving unit, and then transmitted to the signal processing unit.
The signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, restores the image information carried by the ultrasonic echo signals and transmits the image information to the image processing unit.
The image processing unit analyzes the image of the ultrasonic imaging, including edge detection, image segmentation and textural feature identification, and then transmits the processed image to the display unit for display to assist the diagnosis of doctors. The display unit is preferably a color liquid crystal display. In the image processing process, the method comprises the following steps:
(1) edge detection and image segmentation using fuzzy C-means algorithm
The fuzzy C-means algorithm is an algorithm that determines the degree of membership to which each data point belongs to a certain cluster. The clustering algorithm is an improvement of the traditional hard clustering algorithm. The fuzzy C-means algorithm is used for dividing the gray level image, is a process of calibrating after fuzzy clustering and is very suitable for the characteristics of fuzziness and uncertainty existing in the gray level image.
Regarding the concepts of fuzzy set and membership: if for any element x in the domain D there is a number U(x) ∈ [0, 1] corresponding to it, then U is called a fuzzy set on D, and U(x) is called the membership of x to U.
When x varies over D, U(x) is a function, called the membership function of U. The closer the membership U(x) is to 1, the higher the degree to which x belongs to U; the closer U(x) is to 0, the lower that degree. The membership function U(x), taking values in the interval [0, 1], thus expresses the degree to which x belongs to U.
The fuzzy C-means algorithm uses a membership matrix U = [u_{is}] to represent the degree to which each object belongs to each class; the objective function of fuzzy C-means clustering is:
J(U, C) = \sum_{i=1}^{K} \sum_{s=1}^{N} u_{is}^{m} \left\| x_s - c_i \right\|^2 \quad (1)

In the above equation, J(U, C) is the weighted sum of squared distances from the individual data points to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_{is} is the membership matrix, m is the weighting exponent, c_i is the center of each class, and x_s is an individual data point.
For an ultrasonic imaging image transmitted to the image processing unit, the information of the image is taken as a data set X = (x_1, x_2 … x_n). The fuzzy C-means algorithm divides the data set X into c classes according to some feature of the image (in this embodiment, gray level is selected as the feature), i.e. there are K cluster centers. The objective function is formed by weighting the similarity measure between each pixel in the image and each cluster center, and formula (1) is used to calculate the membership of each pixel relative to each cluster center. The fuzzy C-means algorithm then minimizes the clustering objective function J(U, C) by multi-iteration optimization. Preferably, c is generally much smaller than the total number of samples in the clustering data set, while ensuring c > 1. In this embodiment, c = 2 and m = 2 are selected, so that after multiple iterative computations 2 fuzzy clustering membership matrices are obtained, corresponding approximately to the target area and the background area of the image to be segmented. Binarization processing is then performed on the fuzzy membership matrices to obtain the segmented target area image, which is the image of the space-occupying lesion site.
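The clustering and binarization described above can be sketched as a minimal 1-D fuzzy C-means over pixel gray values, with c = 2 and m = 2 as in this embodiment. The function names and the synthetic image are illustrative, not part of the patent:

```python
import numpy as np

def fuzzy_c_means(data, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal 1-D fuzzy C-means: iteratively update cluster centers and the
    membership matrix u (shape c x n) until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, data.shape[0]))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(max_iter):
        um = u ** m                         # weighted memberships u_is^m
        centers = (um @ data) / um.sum(axis=1)
        dist = np.abs(data[None, :] - centers[:, None]) + 1e-12
        inv = dist ** (-2.0 / (m - 1))      # standard FCM membership update
        u_new = inv / inv.sum(axis=0)
        done = np.abs(u_new - u).max() < tol
        u = u_new
        if done:
            break
    return centers, u

def segment(image, c=2, m=2.0):
    """Cluster gray values with FCM, then binarize: pixels whose highest
    membership belongs to the brighter cluster form the target region."""
    pixels = image.ravel().astype(float)
    centers, u = fuzzy_c_means(pixels, c=c, m=m)
    labels = u.argmax(axis=0)
    return (labels == centers.argmax()).reshape(image.shape)
```

On a bimodal image this recovers the bright target region; real ultrasonic images would normally be denoised first.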
The binarization processing binarizes the gray level image to obtain a binary image, so that further processing depends only on the positions of points with pixel value 0 or 255 rather than on multi-level pixel values; the processing is simple, and the amount of data to process and compress is small. To obtain an ideal binary image, a non-overlapping region is generally defined by a closed, connected boundary. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the specific object and are given the gray value 255; pixels below the threshold are excluded from the object area and given the gray value 0, representing the background area. In this embodiment, the threshold is selected using an iterative method, which obtains the threshold by successive approximation. First, an initial threshold T(j) is selected; usually the average gray value of the whole image can be selected as the initial threshold. Here j is the iteration number, initially j = 0. The image f(x, y) is then divided into 2 regions using T(j):

R_1(j) = \{ (x, y) : f(x, y) \geq T(j) \}, \qquad R_2(j) = \{ (x, y) : f(x, y) < T(j) \}

The average gray values of the two regions are calculated according to formula (2) and formula (3):

\mu_1(j) = \frac{1}{|R_1(j)|} \sum_{(x, y) \in R_1(j)} f(x, y) \quad (2)

\mu_2(j) = \frac{1}{|R_2(j)|} \sum_{(x, y) \in R_2(j)} f(x, y) \quad (3)

Then a new threshold is calculated according to formula (4):

T(j + 1) = \frac{\mu_1(j) + \mu_2(j)}{2} \quad (4)

Finally, let j = j + 1 and iterate until the difference between T(j + 1) and T(j) is smaller than a set value; in this embodiment, the set value is 1.
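The iterative threshold selection can be sketched as follows (a minimal illustration assuming a bimodal image; the helper name is an assumption):

```python
import numpy as np

def iterative_threshold(img, set_value=1.0):
    """Successive-approximation threshold selection: start from the global
    mean gray value, then repeat T <- (mean of region at/above T + mean of
    region below T) / 2 until the change is smaller than set_value (1 here)."""
    t = img.mean()                          # initial threshold T(0)
    while True:
        mu1 = img[img >= t].mean()          # average gray of region R1
        mu2 = img[img < t].mean()           # average gray of region R2
        t_new = 0.5 * (mu1 + mu2)           # formula (4)
        if abs(t_new - t) < set_value:
            return t_new
        t = t_new
```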
Among space-occupying lesions, taking tumors as an example, both the doctor and the patient wish to have a preliminary assessment, as early as possible, of whether the tumor is benign or malignant. Tumor images are characterized by uneven gray level distribution, and the gray value of blood vessels connecting parts of the tumor with external tissues may be lower than that of the background; using the fuzzy C-means algorithm, the target area image can be segmented from the surrounding tissues more quickly and accurately.
(2) Texture feature identification using grayscale difference matrix algorithm
For the segmented target region image, i.e. the image of the occupied lesion, the doctor needs to carefully observe various lesion feature quantities of the image for diagnosis. In order to assist a doctor to judge quickly and avoid that the doctor overlooks some lesion characteristic quantities due to manual overlooking, the invention uses a gray difference matrix algorithm to identify the texture characteristics of the image of the space-occupying lesion part. The gray level difference matrix algorithm is used for calculating the probability of the occurrence of the gray level difference value in a certain neighborhood range, reflecting the correlation degree of different pixels in the neighborhood, and the calculation method is a two-dimensional functional relation, so that the calculation processing speed is high, and the image characteristics can be quickly and accurately extracted.
The sum of the absolute differences of the gray levels of the target area image is stored in a matrix; in the present embodiment, the 4 × 4 image pixels are represented by the following matrix I:

[4 × 4 pixel gray-level matrix I, rendered as an image in the original document]

The average gray value A_i of the adjacent pixels in the neighborhood is calculated according to the difference δ between the gray value of the target area image and the average gray value of the adjacent pixels in the neighborhood:

A_i = \frac{1}{K} \sum_{(k_x, k_y, k_z) \neq (0, 0, 0)} x_{gl}(j_x + k_x,\ j_y + k_y,\ j_z + k_z) \quad (5)

In the above formula (5), A_i is the average gray value, X_{gl} is the set of pixels of the area image, x_{gl}(j_x, j_y, j_z) ∈ X_{gl} is the gray level of the pixel at position (j_x, j_y, j_z), K is the number of neighboring pixels, (j_x + k_x), (j_y + k_y) and (j_z + k_z) index the neighboring pixels in the x, y and z directions respectively, and δ is the difference between the gray value of the target area image and the average gray value of the adjacent pixels in the neighborhood.
Combining the matrix I with the average gray values A_i and quantizing yields the matrix parameters of the gray level difference matrix. For each gray level i:

p_i = \frac{n_i}{N}, \qquad s_i = \sum_{\{\text{pixels with gray level } i\}} \left| i - A_i \right|

where n_i is the gray level count (the number of pixels at level i), p_i is the gray level probability, s_i is the sum of the absolute differences of the gray levels, and N is the total number of pixels counted.
In this embodiment, taking a pixel as an example, the neighboring pixels refer to 8 pixels around a certain pixel.
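With the 8-pixel neighborhood just described, the gray level difference matrix columns n_i, p_i and s_i can be sketched for a 2-D image as follows (the function name and the three-column representation are illustrative assumptions):

```python
import numpy as np

def ngtdm(img, levels):
    """Gray level difference matrix columns for a 2-D image with integer
    gray levels 0..levels-1, using the 8 surrounding pixels as the
    neighborhood. Returns, per gray level i: the pixel count n_i, the
    probability p_i, and the sum of absolute differences s_i = sum |i - A_i|."""
    h, w = img.shape
    n = np.zeros(levels)
    s = np.zeros(levels)
    for y in range(1, h - 1):               # interior pixels only, so each
        for x in range(1, w - 1):           # pixel has a full 8-neighborhood
            g = int(img[y, x])
            block = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            a = (block.sum() - g) / 8.0     # average gray value A_i of the
            n[g] += 1                       # 8 neighbors
            s[g] += abs(g - a)
    p = n / n.sum()
    return n, p, s
```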
In texture recognition, we mainly analyze and recognize roughness and contrast.
For the roughness (coarseness), the following equation (6) is used for calculation:

F_{crs} = \left[ \varepsilon + \sum_{i=0}^{N_g - 1} p_i\, s_i \right]^{-1} \quad (6)

where N_g is the number of gray levels and ε is a small constant preventing division by zero. For the contrast, the following equation (7) is used for calculation:

F_{con} = \left[ \frac{1}{N_g (N_g - 1)} \sum_{i} \sum_{j} p_i\, p_j\, (i - j)^2 \right] \left[ \frac{1}{N} \sum_{i} s_i \right] \quad (7)

where N is the total number of pixels counted.
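Given the columns p_i and s_i of the gray level difference matrix, roughness and contrast can be computed as below. This sketch follows the standard neighborhood gray-tone difference matrix definitions (Amadasun and King), which the patent's equations (6) and (7) appear to correspond to; the function name is an assumption:

```python
import numpy as np

def ngtdm_features(p, s, n_pix, eps=1e-12):
    """Roughness (coarseness) and contrast from the gray level difference
    matrix columns p_i and s_i; n_pix is the number of pixels counted."""
    coarseness = 1.0 / (eps + float(np.sum(p * s)))     # equation (6)
    ng = int(np.count_nonzero(p))                       # gray levels present
    i = np.arange(len(p))
    diff2 = (i[:, None] - i[None, :]) ** 2
    contrast = (float(np.sum(np.outer(p, p) * diff2)) / (ng * (ng - 1))) \
               * (float(s.sum()) / n_pix)               # equation (7)
    return coarseness, contrast
```

High coarseness corresponds to locally uniform texture (small gray differences); high contrast to large gray-level jumps between neighboring pixels.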
through the process, the texture features of the image can be extracted, and then different feature marks are respectively used for different texture features to be distinguished and displayed, for example, different colors are used for marking and displaying. Preferably, the segmented space-occupying lesion image and the identified texture feature can be displayed by popping up a separate window on a color liquid crystal display in an enlarged manner. And 3, popping up the window, and using software programming to realize the effect of automatic popping up.
Through the technical scheme of the embodiment 1, doctors can observe the focus more clearly and more intuitively, and the doctors are assisted to diagnose more accurately in the examination process.
It should be noted that the texture feature image obtained by image processing and displayed on the display of the ultrasonic examination system is optimized image information; the ultrasonic examination system does not directly produce a diagnosis from it, and remains a device for assisting the doctor in examination and diagnosis. In this embodiment, owing to the individual characteristics of the ultrasonic imaging features of space-occupying tumors in the uterine cavity, the image segmentation and image feature identification effects are better for such tumors than for other space-occupying lesions.
Example 2
The fuzzy C-means algorithm segments the image quickly, but some fuzzy areas remain at the edge of the image of the space-occupying lesion area. In the subsequent texture feature identification of these fuzzy regions, the image information of the space-occupying lesion region and that of the peripheral region blur into each other and affect the texture feature identification result; these blurred regions therefore need to be confirmed again.
On the basis of the embodiment 1, the optimization is further performed, after the image processing unit uses the fuzzy C-means algorithm, the CANNY algorithm is used for secondary edge detection and secondary image segmentation, some fuzzy areas at the edge of the image which are remained after the image is segmented by the fuzzy C-means algorithm for the first time can be effectively eliminated, and the image processing result is more accurate.
The CANNY algorithm is implemented by the following steps:
1. A Gaussian filter is used to smooth the image and filter out noise.
Since the algorithm is based primarily on first- and second-order differential operations on the image intensity, and derivatives are sensitive to noise, the algorithm preprocesses the image data, typically using filters, to improve edge detection performance in the presence of noise. When the CANNY algorithm is used for edge detection, the original image data is convolved with a Gaussian template, i.e. a Gaussian smoothing filter is used for noise reduction. In general, the following 5 × 5 matrix (the widely used template for σ ≈ 1.4) can be selected as the Gaussian template:

\frac{1}{159} \begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix}
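A direct (unoptimized) convolution of the interior pixels with this 5 × 5 template might look like the following sketch; the function name is illustrative:

```python
import numpy as np

# The 5 x 5 Gaussian template above (entries sum to 159, hence the 1/159)
GAUSS = np.array([[2, 4, 5, 4, 2],
                  [4, 9, 12, 9, 4],
                  [5, 12, 15, 12, 5],
                  [4, 9, 12, 9, 4],
                  [2, 4, 5, 4, 2]], dtype=float) / 159.0

def gaussian_smooth(img):
    """Convolve each interior pixel with the 5 x 5 Gaussian template;
    border pixels (no full 5 x 5 neighborhood) are left unchanged."""
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y, x] = np.sum(GAUSS * img[y - 2:y + 3, x - 2:x + 3])
    return out
```

Because the template entries sum to 1, flat image regions pass through unchanged while pixel-level noise is averaged away.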
2. and calculating the gradient strength and the direction of each pixel point in the image.
The gradient is calculated using an existing first-order partial derivative operator; commonly used operators include the Roberts operator, the Prewitt operator and the Sobel operator. In this embodiment, the Sobel operator is selected, using the following 3 matrix templates:

G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad K = \begin{bmatrix} k_1 & k_2 & k_3 \\ k_4 & k_5 & k_6 \\ k_7 & k_8 & k_9 \end{bmatrix}
Gxis a matrix for horizontal direction detection, GyIs a matrix for detecting the vertical direction, K is a neighborhood label matrix, and according to the above 3 matrices, the gradient magnitude and the corresponding direction of the image can be calculated by using formula (8) and formula (9):
G = √(G_x² + G_y²)    (8)

θ = arctan(G_y / G_x)    (9)
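The gradient computation of formulas (8) and (9) can be sketched with the Sobel templates (a minimal numpy/scipy example; np.arctan2 is used instead of a plain arctan so that G_x = 0 is handled, an implementation choice):

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel templates for horizontal (G_x) and vertical (G_y) edge response.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def gradient(image):
    """Return the gradient magnitude (formula 8) and direction (formula 9)."""
    img = image.astype(np.float64)
    gx = convolve(img, SOBEL_X, mode='nearest')
    gy = convolve(img, SOBEL_Y, mode='nearest')
    magnitude = np.hypot(gx, gy)        # sqrt(G_x^2 + G_y^2)
    direction = np.arctan2(gy, gx)      # arctan(G_y / G_x), quadrant-aware
    return magnitude, direction
```

On a vertical step edge the magnitude is nonzero only near the step, which is the response the subsequent suppression steps operate on.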
3. non-maxima suppression is applied to eliminate spurious responses from edge detection.
Non-maximum suppression finds, at each pixel, the local maximum of the gradient magnitude along the gradient direction: a pixel whose gradient value is the local optimum within its neighborhood is judged to be an edge pixel, while the gray values of the remaining non-maximum points are set to the background value and their information is suppressed. Non-edge pixels are thus excluded and the candidate image edges are retained.
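The suppression step can be sketched by quantizing the gradient direction to four neighbor axes (a common simplification; the loop-based form is chosen for clarity, not speed):

```python
import numpy as np

def non_max_suppression(magnitude, direction):
    """Keep a pixel only if it is the local maximum along its gradient
    direction (quantized to 0/45/90/135 degrees); suppress the rest."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    # Map direction into [0, 180) degrees for quantization.
    angle = (np.rad2deg(direction) + 180.0) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # horizontal gradient
                n1, n2 = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                    # 45-degree gradient
                n1, n2 = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                   # vertical gradient
                n1, n2 = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                             # 135-degree gradient
                n1, n2 = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= n1 and magnitude[i, j] >= n2:
                out[i, j] = magnitude[i, j]   # local maximum: keep
    return out
```

A one-pixel-wide ridge survives intact while its flanks, which are not local maxima across the gradient direction, are zeroed out.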
4. Double threshold detection is applied to determine true and potential edges.
After applying non-maximum suppression, the remaining pixels represent the actual edges in the image more accurately, but some edge pixels caused by noise and color variation remain. To remove these spurious responses, high and low thresholds are selected so as to suppress the edge pixels with weak gradient values and retain the edge pixels with high gradient values. Specifically, an edge pixel is labeled as a strong edge pixel if its gradient value is above the high threshold, and as a weak edge pixel if its gradient value is less than the high threshold but greater than the low threshold. In this embodiment, the ratio of the high threshold to the low threshold is between 2:1 and 3:1.
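The double-threshold rule with hysteresis can be sketched as follows (a minimal example; resolving weak pixels through 4-connected components via scipy.ndimage.label is an implementation choice, 8-connectivity is also common):

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(nms_mag, low, high):
    """Double-threshold classification with hysteresis: strong pixels
    (>= high) are true edges; weak pixels (between low and high) are
    kept only if their connected component contains a strong pixel."""
    strong = nms_mag >= high
    weak = (nms_mag > low) & ~strong
    # Label connected components of all candidate edge pixels.
    labels, n = label(strong | weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # components touching a strong pixel
    keep[0] = False                          # background label
    return keep[labels]
```

With low=4 and high=8 (a 2:1 ratio, within the range stated above), a weak pixel attached to a strong one survives while an isolated weak pixel is discarded: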
Example 3
The invention provides a method for carrying out image segmentation and image recognition on an ultrasonic imaging image, which comprises the following steps of:
s1, calculating to obtain a clustering fuzzy membership matrix by using a fuzzy C-means algorithm on an ultrasonic imaging image;
s2, performing binarization processing on the fuzzy membership matrix to obtain a segmented target area image, wherein the target area image is an image of the space-occupying lesion;
s3, storing the sum I of absolute differences of the gray levels of the target area image in a matrix I;
s4, calculating the average gray value A_i of the adjacent pixels in the neighborhood according to the difference δ between the gray value of the target area image and the average gray value of the adjacent pixels in the neighborhood;
S5, combining the average gray value A_i of the adjacent pixels with the matrix I, and quantizing to obtain a gray level difference matrix; the matrix parameters of the gray level difference matrix comprise the gray level of each pixel in the image, the gray level probability, and the sum of absolute differences of the gray levels;
and S6, calculating the roughness and the contrast of the image according to the matrix parameters of the gray level difference matrix, and finishing the texture feature identification.
In S1, a fuzzy C-means algorithm is used, the objective function of which is:
J(U, C) = Σ (s=1..N) Σ (i=1..K) (u_is)^m · ||x_s − c_i||²
In the above equation, J(U, C) is the weighted sum of squared distances from the individual data points to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_is is the membership matrix, m is the weighting exponent, c_i is the center of each class, and x_s is an individual data point.
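A minimal sketch of the fuzzy C-means iteration that minimizes this objective (assuming Euclidean distance and the standard alternating update of the membership matrix U and the centers C; the function name and parameters are illustrative, not the patent's implementation):

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: alternately update the N x k membership matrix U
    and the cluster centers C to minimize
    J(U, C) = sum_s sum_i (u_is)^m * ||x_s - c_i||^2."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, k))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]             # center update
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2)  # point-center distances
        d = np.fmax(d, 1e-12)                                # avoid division by zero
        # Membership update: u_is proportional to d_is^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, C
```

Thresholding a column of U (for example U[:, i] > 0.5) then yields the binarized target-area mask described in step S2.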
The above steps can be further optimized: after the fuzzy C-means algorithm is used in step S1, the CANNY algorithm is also used for secondary edge detection and secondary image segmentation, which effectively eliminates the blurred regions remaining at the image edges after the first fuzzy C-means segmentation and makes the image processing result more accurate.
In step S4, the average gray value of the adjacent pixels in the neighborhood is calculated, specifically: according to the calculation formula for the average gray value of the adjacent pixels in the neighborhood
A_i = (1/W) · Σ (k_x=−δ..δ) Σ (k_y=−δ..δ) Σ (k_z=−δ..δ) x_gl(j_x+k_x, j_y+k_y, j_z+k_z),  (k_x, k_y, k_z) ≠ (0, 0, 0)    (5)

W = (2δ+1)³ − 1
the average gray value of the adjacent pixels is calculated. In formula (5), A_i is the average gray value, X_gl is the set of pixels of the area image, x_gl(j_x, j_y, j_z) ∈ X_gl is the gray level of the pixel at position (j_x, j_y, j_z), (j_x+k_x), (j_y+k_y) and (j_z+k_z) index the neighboring pixels in the x, y and z directions, and δ is the difference between the gray value of the image in the target area and the average gray value of the adjacent pixels in the neighborhood.
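The neighborhood averaging of formula (5) can be sketched in two dimensions (the patent states it for three dimensions; the 2-D reduction and the helper name are an adaptation for brevity):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_average(image, delta=1):
    """For each pixel, the mean gray value of its (2*delta+1)^2
    neighborhood excluding the center pixel (2-D analogue of formula (5))."""
    img = image.astype(np.float64)
    size = 2 * delta + 1
    w = size * size                      # full window size, center included
    # Sum over the full window, then remove the center and renormalize
    # by W = (2*delta+1)^2 - 1 neighbors.
    window_sum = uniform_filter(img, size=size, mode='nearest') * w
    return (window_sum - img) / (w - 1)
```

On a constant image every A_i equals that constant, and an isolated bright pixel contributes to its neighbors' averages but not to its own.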
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (9)

1. An ultrasonic inspection system capable of automatically identifying image features, characterized in that: the system comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are connected in sequence;
the ultrasonic probe is used for realizing conversion between an electric signal and an ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, restores image information carried by ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm, and texture feature identification by using a gray difference matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and also used for displaying the characteristic mark of the texture characteristic identified by the image processing unit.
2. An ultrasound inspection system with automatic image feature identification according to claim 1, wherein: the feature mark is displayed in an enlarged manner.
3. An ultrasound inspection system with automatic image feature identification according to claim 2, wherein: the display output unit automatically pops up a display window showing the enlarged texture features.
4. An ultrasound inspection system with automatic image feature identification according to claim 1, wherein: the feature labels are displayed in different colors for different textural features.
5. An ultrasonic inspection system capable of automatically recognizing image features according to claim 1 or 2, wherein: the image processing unit also performs secondary edge detection and secondary image segmentation using the CANNY algorithm after using the fuzzy C-means algorithm.
6. An identification method capable of automatically identifying image features is characterized by comprising the following steps:
s1, calculating to obtain a clustering fuzzy membership matrix by using a fuzzy C-means algorithm on an ultrasonic imaging image;
s2, performing binarization processing on the fuzzy membership matrix to obtain a segmented target area image;
s3, storing the sum I of absolute differences of the gray levels of the target area image in a matrix I;
s4, calculating the average gray value A_i of the adjacent pixels in the neighborhood according to the difference δ between the gray value of the target area image and the average gray value of the adjacent pixels in the neighborhood;
S5, combining the average gray value A_i of the adjacent pixels with the matrix I, and quantizing to obtain a gray level difference matrix; the matrix parameters of the gray level difference matrix comprise the gray level of each pixel in the image, the gray level probability, and the sum of absolute differences of the gray levels;
and S6, calculating the roughness and the contrast of the image according to the matrix parameters of the gray level difference matrix, and finishing the texture feature identification.
7. An identification method capable of automatically identifying image features according to claim 6, wherein: in S1, a fuzzy C-means algorithm is used, the objective function of which is:
J(U, C) = Σ (s=1..N) Σ (i=1..K) (u_is)^m · ||x_s − c_i||²
In the above equation, J(U, C) is the weighted sum of squared distances from the individual data points to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_is is the membership matrix, m is the weighting exponent, c_i is the center of each class, and x_s is an individual data point.
8. An identification method capable of automatically identifying image features according to claim 6, wherein: after the fuzzy C-means algorithm is used in S1, a CANNY algorithm is also used for secondary edge detection and secondary image segmentation.
9. An identification method capable of automatically identifying image features according to claim 6, wherein: in S4, the average gray value A_i of the adjacent pixels in the neighborhood is calculated by the following formula:
A_i = (1/W) · Σ (k_x=−δ..δ) Σ (k_y=−δ..δ) Σ (k_z=−δ..δ) x_gl(j_x+k_x, j_y+k_y, j_z+k_z),  (k_x, k_y, k_z) ≠ (0, 0, 0), with W = (2δ+1)³ − 1
in the above formula: a. theiIs the mean gray value, XglIs the sum of the pixels of the area image, xgl(jx,jy,jz)∈XglIs the gray level of the pixel, K is the pixel gray level (j)x,jy,jz) (j) ofx+kx)、(jy+ky)、(jz+kz) Respectively representing the gray level and the number of the image pixels in the x direction, the y direction and the z direction, wherein delta is the difference between the gray level value of the image in the target area and the average gray level value of the adjacent pixels in the neighborhood.
CN202111057492.4A 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method Active CN113689424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111057492.4A CN113689424B (en) 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method


Publications (2)

Publication Number Publication Date
CN113689424A true CN113689424A (en) 2021-11-23
CN113689424B CN113689424B (en) 2023-11-24

Family

ID=78586197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111057492.4A Active CN113689424B (en) 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method

Country Status (1)

Country Link
CN (1) CN113689424B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040151383A1 (en) * 2002-11-22 2004-08-05 Stmicroelectronics, S.R.L. Method for the analysis of micro-array images and relative device
US20070065009A1 (en) * 2005-08-26 2007-03-22 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound image enhancement and speckle mitigation method
US20140018681A1 (en) * 2012-07-10 2014-01-16 National Taiwan University Ultrasound imaging breast tumor detection and diagnostic system and method
CN106618632A (en) * 2016-12-14 2017-05-10 无锡祥生医学影像有限责任公司 Ultrasonic imaging system and method with automatic optimization
CN107403438A (en) * 2017-08-07 2017-11-28 河海大学常州校区 Improve the ultrasonoscopy focal zone dividing method of fuzzy clustering algorithm
CN110349160A (en) * 2019-06-25 2019-10-18 电子科技大学 One kind is based on super-pixel and fuzzy C-means clustering SAR image segmentation method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宫庆防: "彩色多普勒超声诊断急性化脓性甲状腺炎合并脓肿1例", 《医学理论与实践》, vol. 30, no. 8, pages 1095 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830023A (en) * 2023-02-16 2023-03-21 山东中都机器有限公司 Image processing-based method for detecting defects of transmission belt of rubber belt conveyor
CN117912643A (en) * 2023-03-28 2024-04-19 西弥斯医疗科技(湖南)有限公司 Intelligent treatment system based on skin detection
CN117912643B (en) * 2023-03-28 2024-06-25 西弥斯医疗科技(湖南)有限公司 Intelligent treatment system based on skin detection

Also Published As

Publication number Publication date
CN113689424B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN106682435B (en) System and method for automatically detecting lesion in medical image through multi-model fusion
US8295565B2 (en) Method of image quality assessment to produce standardized imaging data
US7260248B2 (en) Image processing using measures of similarity
Goswami et al. Brain tumour detection using unsupervised learning based neural network
CN110772286B (en) System for discernment liver focal lesion based on ultrasonic contrast
KR20230059799A (en) A Connected Machine Learning Model Using Collaborative Training for Lesion Detection
CN113689424B (en) Ultrasonic inspection system capable of automatically identifying image features and identification method
JPH09508814A (en) Automatic method and system for segmenting medical images
Hiremath et al. Follicle detection in ultrasound images of ovaries using active contours method
CN116452523A (en) Ultrasonic image quality quantitative evaluation method
Edwin et al. Liver and tumour segmentation from abdominal CT images using adaptive threshold method
CN116777962A (en) Two-dimensional medical image registration method and system based on artificial intelligence
CN113940702B (en) Thyroid nodule echo analysis device
CN113940704A (en) Thyroid-based muscle and fascia detection device
Gopinath Tumor detection in prostate organ using canny edge detection technique
Pavel et al. Cancer detection using image processing techniques based on cell counting, cell area measurement and clump detection
CN113940701B (en) Thyroid-based tracheal and vascular detection device
CN116152253B (en) Cardiac magnetic resonance mapping quantification method, system and storage medium
dos Santos Filho et al. A study on intravascular ultrasound image processing
CN118212235B (en) Capsule endoscope image screening method and system
CN118447014B (en) Barium meal contrast image focus identification system for auxiliary diagnosis of digestive system department
Jeevitha et al. Mammogram Images Using Noise Removal of Filtering Techniques
Likitha et al. Image Segmentation Techniques for Bone Cancer Identification in X-ray and MRI Imagery
Permana et al. Classification of Cervical Intraepithelial Neoplasia Based on Combination of GLCM and L* a* b* on Colposcopy Image Using Machine Learning
Mahaveera et al. Pectoral Muscle Segmentation from Digital Mammograms Using a Transformative Approach

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant