CN113689424B - Ultrasonic inspection system capable of automatically identifying image features and identification method - Google Patents


Publication number
CN113689424B
CN113689424B (application CN202111057492.4A)
Authority
CN
China
Prior art keywords
image
ultrasonic
processing unit
gray
matrix
Prior art date
Legal status: Active
Application number
CN202111057492.4A
Other languages
Chinese (zh)
Other versions
CN113689424A (en
Inventor
钟华
粘永健
熊希
宫庆防
曹濒月
Current Assignee
Third Military Medical University TMMU
Original Assignee
Third Military Medical University TMMU
Priority date
Filing date
Publication date
Application filed by Third Military Medical University TMMU
Priority to CN202111057492.4A
Publication of CN113689424A
Application granted
Publication of CN113689424B


Classifications

    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis)
    • G06F 18/23: Clustering techniques (G06F 18/00 Pattern recognition)
    • G06T 5/73
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation involving thresholding
    • G16H 30/40: ICT for processing medical images, e.g. editing
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/30096: Tumor; Lesion
    • G06T 2207/30204: Marker
    • G06T 2207/30208: Marker matrix

Abstract

The application provides an ultrasonic inspection system capable of automatically identifying image features, and an identification method. The ultrasonic inspection system comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit, connected in sequence. The ultrasonic probe converts between electric signals and ultrasonic signals. The transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe, and forwards the echo signals to the signal processing unit. The signal processing unit demodulates the signals from the transmitting and receiving unit, recovers the image information carried by the ultrasonic echo signals, synthesizes it into an ultrasonic imaging image and transmits the image to the image processing unit. The image processing unit analyzes the ultrasonic imaging image; the analysis includes edge detection and image segmentation using a fuzzy C-means algorithm, and texture feature recognition using a gray differential matrix algorithm. The display output unit displays the ultrasonic imaging image.

Description

Ultrasonic inspection system capable of automatically identifying image features and identification method
Technical Field
The application relates to the technical field of medical image processing, in particular to an ultrasonic inspection system capable of automatically identifying image features and an identification method.
Background
The ultrasonic diagnostic apparatus is by now a mature noninvasive detection means and can be used to detect space-occupying lesions of some abdominal organs. However, with existing ultrasonic diagnostic apparatus, if a space-occupying lesion appears during the examination, the insufficient resolution and definition of the ultrasonic imaging image prevent the doctor from making a preliminary prediction of whether the lesion is benign or malignant in a short time, and from properly guiding the patient's next diagnostic step. If the lesion is an early-stage malignant tumor, an inconclusive ultrasound examination leaves the patient unaware of the tumor and of its nature; no urgent action is taken, and diagnosis and treatment are ultimately delayed.
The prior art discloses a method for extracting the liver region from an ultrasonic imaging image, but that scheme only uses the FCM_1 algorithm to realize image segmentation; it can extract a fairly complete image, but cannot further process the image to assist the doctor's diagnosis. The prior art also discloses a fully automatic ultrasound contrast image segmentation method, which improves the brightness of a tissue or organ in the image by adjusting gray values, but it requires a contrast agent and its operation is complex. Moreover, the prior art does not solve the technical problem of optimizing ultrasonic imaging images in real time during an ultrasonic examination so as to assist doctors in making a more accurate on-site diagnosis.
There is a need for an ultrasound examination system that optimizes the ultrasound imaging image in real time and increases its resolution, to aid the physician's on-site diagnosis during the examination.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides an ultrasonic inspection system and an identification method capable of automatically identifying image features, to solve the technical problem that existing systems cannot optimize ultrasonic imaging images in real time during an ultrasonic examination and therefore cannot assist doctors in making a more accurate on-site diagnosis.
The application provides an ultrasonic inspection system capable of automatically identifying image characteristics, which comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are sequentially connected;
the ultrasonic probe is used for realizing conversion between the electric signal and the ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, recovers image information carried by the ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm and texture feature recognition by using a gray level differential matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and displaying the texture features identified by the image processing unit in the form of feature marks.
Further, the feature marks are displayed in an enlarged manner.
Further, the display output unit automatically pops up a window displaying the enlarged texture features.
Further, different texture features are marked with feature marks of different colors.
Further, after using the fuzzy C-means algorithm, the image processing unit also uses the CANNY algorithm to carry out secondary edge detection and secondary image segmentation.
further, an identification method capable of automatically identifying image features comprises the following steps:
S1, calculating clustering fuzzy membership matrices for the ultrasonic imaging image by using a fuzzy C-means algorithm;
S2, binarizing the fuzzy membership matrices to obtain a segmented target-area image, wherein the target-area image is an image of the space-occupying lesion site;
S3, storing the gray values of the pixels of the target-area image in a matrix I;
S4, calculating the average gray value A_i of the adjacent pixels within a neighborhood of distance δ around each pixel of the target-area image;
S5, combining the neighborhood average gray values A_i with the matrix I to quantize a gray differential matrix, whose matrix parameters comprise the gray level, the gray probability and the sum of absolute gray-level differences for each gray level in the image;
S6, calculating the roughness and contrast of the image from the matrix parameters of the gray differential matrix, completing the texture feature recognition.
Further, in S1, a fuzzy C-means algorithm is used, and the objective function is:

J(U, C) = Σ_{i=1}^{N} Σ_{s=1}^{K} (u_is)^m ||x_i - c_s||^2   (1)

In the above formula, J(U, C) is the weighted sum of squared distances from each data point to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_is is the membership matrix, m is a weighting exponent, c_s is the center of class s, and x_i is an individual data point.
Further, in S1, after using the fuzzy C-means algorithm, the CANNY algorithm is also used to perform secondary edge detection and secondary image segmentation.
Further, in S4, the average gray value A_i of the adjacent pixels in the neighborhood is calculated by the following formula:

A_i = (1/W) Σ_{k_x=-δ}^{δ} Σ_{k_y=-δ}^{δ} Σ_{k_z=-δ}^{δ} x_gl(j_x + k_x, j_y + k_y, j_z + k_z),  (k_x, k_y, k_z) ≠ (0, 0, 0)   (5)

In the above formula, A_i is the average gray value of the neighborhood, X_gl is the set of pixels of the target-area image, x_gl(j_x, j_y, j_z) ∈ X_gl is the gray value of the pixel at position (j_x, j_y, j_z), (j_x + k_x), (j_y + k_y) and (j_z + k_z) index the neighboring pixels in the x, y and z directions, W is the number of pixels in the neighborhood, and δ is the neighborhood distance around the target pixel.
Further, in S5, the gray level, the gray probability and the sum of absolute gray-level differences are calculated for each gray level in the image. The specific method comprises: representing the image pixels by the matrix I and combining this matrix with formula (5) to obtain the quantization parameters of the gray differential matrix: the gray level count n_i of the pixels, the gray probability p_i, and the sum of absolute gray-level differences s_i.
According to the technical scheme, the beneficial effects of the application are as follows:
1. The application can process the ultrasonic imaging image in real time during the ultrasonic examination and display the texture features identified by the image processing unit in multiple colors and in enlarged form, so that the doctor can observe the focus more clearly and intuitively and is assisted in making a more accurate diagnosis during the examination.
2. Edge detection and image segmentation are performed on the ultrasonic imaging image with a fuzzy C-means algorithm, reducing the workload of the subsequent texture feature recognition; texture feature recognition with the gray differential matrix algorithm is fast, yields processing results quickly and accurately, and suits the real-time requirements of the ultrasonic inspection site.
3. After the fuzzy C-means algorithm is used, the CANNY algorithm is also used for secondary edge detection and secondary image segmentation, so that a plurality of fuzzy areas at the edges of the image can be effectively eliminated after the image is segmented by the fuzzy C-means algorithm for the first time, and the image processing result is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a system architecture according to the present application.
Fig. 2 is a flow chart of the method of the present application.
Detailed Description
Embodiments of the technical scheme of the present application will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present application, and thus are merely examples, and are not intended to limit the scope of the present application.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
Example 1
As shown in FIG. 1, the application provides an ultrasonic inspection system for automatically identifying image texture features, which comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are sequentially connected;
the ultrasonic probe is used for realizing conversion between the electric signal and the ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, recovers image information carried by the ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm and texture feature recognition by using a gray level differential matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and displaying the texture features identified by the image processing unit in a plurality of colors.
The following describes in detail the working principle of an ultrasonic inspection system for automatically recognizing image texture features:
the ultrasonic probe is also called as an ultrasonic transducer, can realize conversion between electric signals and ultrasonic signals, generates ultrasonic waves when receiving electric pulse driving, and transmits the ultrasonic waves to a part to be diagnosed; the ultrasonic echo signals reflected by the organs of the human body are converted into electric signals through the ultrasonic probe.
The transmitting and receiving unit transmits and receives ultrasonic signals through the ultrasonic probe, and performs electronic focusing and multi-point focusing on the transmitted and received ultrasonic signals. The ultrasonic echo signal received from the ultrasonic probe is filtered, amplified and various pre-processed in the transmitting-receiving unit, and then transmitted to the signal processing unit.
The signal processing unit demodulates the signal transmitted by the transmitting and receiving unit, recovers the image information carried by the ultrasonic echo signal and transmits the image information to the image processing unit.
The image processing unit analyzes the ultrasonic imaging image, including edge detection, image segmentation and texture feature recognition, and then transmits the processed image to the display unit to assist the doctor's diagnosis. The display unit is preferably a color liquid crystal display. The image processing comprises the following steps:
(1) Edge detection and image segmentation using fuzzy C-means algorithm
The fuzzy C-means algorithm assigns each data point to every cluster with a certain degree of membership. This clustering algorithm is an improvement on the traditional hard clustering algorithm. Used for gray-image segmentation, the fuzzy C-means algorithm is a process of fuzzy clustering followed by calibration, and is well suited to the ambiguity and uncertainty present in gray images.
As for the concepts of fuzzy set and membership: if every element x in the domain D corresponds to a number U(x) ∈ [0, 1], then U is called a fuzzy set on D, and U(x) is called the membership of x to U.
As x varies over D, U(x) is a function, called the membership function of U. The closer the membership U(x) is to 1, the more strongly x belongs to U; the closer U(x) is to 0, the more weakly x belongs to U. The degree to which x belongs to U is thus represented by a membership function U(x) taking values in [0, 1].
The fuzzy C-means algorithm uses a membership matrix U = [u_is] to represent the degree to which each object belongs to each class. The objective function of fuzzy C-means clustering is:

J(U, C) = Σ_{i=1}^{N} Σ_{s=1}^{K} (u_is)^m ||x_i - c_s||^2   (1)

In the above formula, J(U, C) is the weighted sum of squared distances from each data point to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, u_is is the membership matrix, m is a weighting exponent, c_s is the center of class s, and x_i is an individual data point.
For an ultrasound imaging image transmitted to the image processing unit, the information of the image is taken as a data set X = (x_1, x_2, ..., x_n). According to a certain characteristic of the image (gray scale is selected as the characteristic in this embodiment), the data set X is divided into C classes, i.e. there are K cluster centers. The objective function is formed from the weighted similarity measure between each pixel in the image and each cluster center, and the membership of each pixel relative to each cluster center is calculated with formula (1). The fuzzy C-means algorithm then minimizes the objective function J(U, C) by repeated iterative optimization. Preferably, C is much smaller than the total number of clustered sample data while C > 1 is guaranteed; m is a flexibility parameter of the algorithm, and if m is too large the clustering effect deteriorates. In this embodiment C = 2 and m = 2 are selected, so after repeated iterations 2 clustering fuzzy membership matrices are obtained, corresponding approximately to the target area and the background area of the image to be segmented. The fuzzy membership matrices are then binarized to obtain the segmented target-area image, which is the image of the space-occupying lesion site.
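The iterative optimization just described can be sketched in code. This is a minimal illustration on 1-D gray values, not the patent's implementation; the function name `fcm` and the random initialization are assumptions, with c = 2 clusters and m = 2 as in this embodiment:

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means on 1-D data x: returns (membership matrix U, class centers)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # each row of U sums to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)    # weighted cluster centers c_s
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # membership update: u_is = 1 / sum_k (d_is / d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)
    return u, centers
```

Taking the argmax of each row of U splits the pixels into the two classes; binarizing that split corresponds to step S2.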
The binarization step binarizes the gray image. The advantage of a binary image is that the aggregate properties of the image depend only on the positions of points whose pixel value is 0 or 255; multi-level pixel values are not involved, the processing is simplified, and the amount of data to process and compress is small. To obtain an ideal binary image, a closed, connected boundary is generally used to delimit non-overlapping regions. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the object and given the gray value 255; pixels whose gray level is below the threshold are excluded from the object area and given the gray value 0, representing the background. In this embodiment the threshold is selected with an iterative method, which acquires the threshold by successive approximation. First an initial threshold T(j) is selected; the average gray value of the whole image can generally serve as the initial threshold. Here j is the iteration number, initially j = 0. The threshold T(j) divides the image into 2 regions, R_1(j) with gray values ≥ T(j) and R_2(j) with gray values < T(j), whose average gray values are calculated according to formulas (2) and (3):

μ_1(j) = average gray value of R_1(j)   (2)
μ_2(j) = average gray value of R_2(j)   (3)

Then a new threshold is calculated according to formula (4):

T(j + 1) = (μ_1(j) + μ_2(j)) / 2   (4)

Finally j = j + 1, and the iterative calculation repeats until the difference between T(j+1) and T(j) is smaller than a set value; in the present embodiment, that difference is set to 1.
When a space-occupying lesion is found, both the doctor and the patient want a preliminary judgment of whether the tumor is benign or malignant as early as possible. Tumor images are characterized by an uneven gray-level distribution, and the gray values of the blood vessels connecting part of the tumor to the surrounding tissue may be lower than the gray value of the background; using the fuzzy C-means algorithm, the target-area image can be separated from the peripheral tissue more quickly and accurately.
(2) Texture feature recognition using gray differential matrix algorithm
For the segmented target-area image, i.e. the image of the space-occupying lesion site, the doctor needs to examine the various lesion features of the image carefully to make a diagnosis. To help the doctor judge quickly, and to prevent lesion features from being overlooked through human oversight, the application uses a gray differential matrix algorithm to identify the texture features of the lesion image. The gray differential matrix algorithm calculates the probability of gray-level differences within a certain neighborhood and reflects the degree of association between different pixels in the neighborhood; because the calculation is a two-dimensional functional relation, it is fast and can extract image features quickly and accurately.
The gray values of the target-area image are stored in the matrix I; in the present embodiment, 4×4 image pixels are represented by the matrix I.
calculating the average gray value A of adjacent pixels in the neighborhood according to the difference delta between the gray value of the image of the target area and the average gray value of the adjacent pixels in the neighborhood i
In the above formula (5), A i Is the average gray value, X gl Is the sum of pixels of an area image, x gl (j x ,j y ,j z )∈X gl Is the gray level of the pixel, K is the pixel gray level (j x ,j y ,j z ) Number of (j) x +k x )、(j y +k y )、(j z +k z ) The gray level and the number of the image pixels in the x direction, the y direction and the z direction are respectively represented, and delta is the difference between the gray level of the image of the target area and the average gray level of the adjacent pixels in the neighborhood of the target area.
Combining the matrix I with the neighborhood averages A_i quantizes the gray differential matrix and yields its matrix parameters, where n_i is the gray level count of the pixels, p_i is the gray probability, and s_i is the sum of absolute differences of gray levels.
In this embodiment, taking a single pixel as an example, the adjacent pixels are the 8 pixels surrounding that pixel.
In texture recognition, the analysis mainly concerns roughness and contrast.
For roughness, the calculation uses the following formula (6):

Roughness = [ ε + Σ_i p_i s_i ]^(-1)   (6)

For contrast, the calculation uses the following formula (7):

Contrast = [ 1 / (N_g (N_g - 1)) Σ_i Σ_j p_i p_j (i - j)^2 ] · [ (1/N) Σ_i s_i ]   (7)

where ε is a small constant that avoids division by zero, N_g is the number of gray levels present in the image, and N is the total number of pixels.
through the process, the texture features of the image can be extracted, and different feature marks are respectively used for different texture features for distinguishing display, such as mark display with different colors. Preferably, the segmented occupancy lesion image and the identified texture feature may also be displayed in an enlarged manner by popping up a separate window on the color liquid crystal display. And the window is popped up, and the effect of automatic pop-up can be realized by using software programming.
Through the technical scheme of the embodiment 1, a doctor can observe the focus more clearly and intuitively, and the doctor is assisted in diagnosing more accurately in the examination process.
The texture feature image displayed on the display of the ultrasonic inspection system after image processing is optimized image information; the ultrasonic inspection system does not directly produce a diagnosis from it, but serves as a device assisting the doctor in examination and diagnosis. In this embodiment, owing to the distinctive characteristics of the ultrasonic imaging features of space-occupying lesions of the uterine cavity, the method achieves better image segmentation and image feature recognition for these lesions than for other space-occupying lesions.
Example 2
Segmenting the image with the fuzzy C-means algorithm is fast, but a certain blurred region remains at the edge of the occupied lesion area image. In the subsequent texture feature recognition, these blurred areas mix the image information of the occupied lesion area with that of the surrounding area, which degrades the texture feature recognition result; these blurred areas therefore need to be reconfirmed.
On the basis of embodiment 1, the image processing unit is further optimized to perform secondary edge detection and secondary image segmentation by using the CANNY algorithm after using the fuzzy C-means algorithm, so that some fuzzy areas at the edges of the image which remain after the image is segmented by using the fuzzy C-means algorithm for the first time can be effectively eliminated, and the image processing result is more accurate.
The CANNY algorithm is implemented by the following steps:
1. a gaussian filter is used to smooth the image and filter out noise.
Since the algorithm is based primarily on first and second differential operations on the image intensity, and derivatives are sensitive to noise, the image data are pre-processed, typically with filters, to improve noise-related edge-detection performance. When the CANNY algorithm is used for edge detection, the original image data are convolved with a Gaussian template for Gaussian smoothing and noise reduction; in general, a 5×5 matrix may be selected as the Gaussian template.
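The smoothing step can be sketched as a direct convolution with a 5×5 Gaussian template. The specific template below (the widely quoted σ ≈ 1.4 kernel normalized by 1/159) is an assumption, since the patent's own matrix is not reproduced:

```python
import numpy as np

# A 5x5 Gaussian template often quoted for Canny (sigma ~= 1.4); using this
# particular template is an assumption, not taken from the patent.
GAUSS_5X5 = np.array([[2,  4,  5,  4, 2],
                      [4,  9, 12,  9, 4],
                      [5, 12, 15, 12, 5],
                      [4,  9, 12,  9, 4],
                      [2,  4,  5,  4, 2]], dtype=float) / 159.0

def smooth(img):
    """Convolve img with the Gaussian template (zero padding at the border)."""
    pad = np.pad(img.astype(float), 2)
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (pad[y:y + 5, x:x + 5] * GAUSS_5X5).sum()
    return out
```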
2. and calculating the gradient strength and the gradient direction of each pixel point in the image.
The gradient is calculated with an existing first-order derivative operator; common operators include the Roberts operator, the Prewitt operator and the Sobel operator. In this embodiment, the Sobel operator is selected and the following matrix templates are used for calculation:

G_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],  G_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

G_x is the matrix for horizontal detection, G_y is the matrix for vertical-direction detection, and K is the 3×3 neighborhood matrix of image values to which they are applied. From these matrices, the gradient amplitude and the corresponding direction of the image can be calculated with formulas (8) and (9):

G = sqrt(G_x^2 + G_y^2)   (8)
θ = arctan(G_y / G_x)   (9)
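The Sobel gradient computation of formulas (8) and (9) can be sketched as follows; the function name `sobel_gradient` and the zero padding at the border are illustrative choices:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_gradient(img):
    """Return gradient magnitude G and direction theta (radians) per pixel."""
    pad = np.pad(img.astype(float), 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            k = pad[y:y + 3, x:x + 3]      # 3x3 neighborhood K of the pixel
            gx[y, x] = (k * SOBEL_X).sum()
            gy[y, x] = (k * SOBEL_Y).sum()
    g = np.hypot(gx, gy)                   # formula (8): sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)             # formula (9): gradient direction
    return g, theta
```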
3. non-maximum suppression is applied to eliminate spurious responses from edge detection.
Non-maximum suppression finds, for each pixel, whether its gradient value is the local maximum along the gradient direction. Pixels that are not local maxima have their gray values set to the background, while a pixel whose gradient value is the local optimum within its neighborhood is judged to be an edge pixel; the information of the other, non-maximum points is suppressed, which excludes non-edge pixels and retains the candidate image edges.
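Non-maximum suppression with the gradient direction quantized to four orientations can be sketched as follows; the function name and the 4-direction quantization are common implementation choices, not taken from the patent:

```python
import numpy as np

def non_max_suppression(g, theta):
    """Keep a pixel only if its magnitude is maximal along the gradient direction."""
    h, w = g.shape
    out = np.zeros_like(g)
    ang = np.rad2deg(theta) % 180.0        # quantize direction to 0/45/90/135 degrees
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = ang[y, x]
            if a < 22.5 or a >= 157.5:     # horizontal gradient: compare left/right
                p, q = g[y, x - 1], g[y, x + 1]
            elif a < 67.5:                 # 45-degree diagonal
                p, q = g[y - 1, x + 1], g[y + 1, x - 1]
            elif a < 112.5:                # vertical gradient: compare up/down
                p, q = g[y - 1, x], g[y + 1, x]
            else:                          # 135-degree diagonal
                p, q = g[y - 1, x - 1], g[y + 1, x + 1]
            if g[y, x] >= p and g[y, x] >= q:
                out[y, x] = g[y, x]        # local maximum: keep as candidate edge
    return out
```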
4. Dual threshold detection is applied to determine true and potential edges.
After non-maximum suppression, the remaining pixels represent the actual edges in the image more accurately, but some edge pixels caused by noise and color variation remain. To remove these spurious responses, a pair of high and low thresholds is used to suppress edge pixels with weak gradient values and keep edge pixels with high gradient values. Specifically, if the gradient value of an edge pixel is above the high threshold, it is marked as a strong edge pixel; if its gradient value lies between the low and the high threshold, it is marked as a weak edge pixel. In this embodiment, the ratio of the high threshold to the low threshold is between 2:1 and 3:1.
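The double-threshold classification, plus a simple single-pass hysteresis step keeping weak pixels that touch a strong pixel, can be sketched as follows; the function names are illustrative, and a full implementation would propagate strong-edge connectivity iteratively rather than in one pass:

```python
import numpy as np

def double_threshold(g, low, high):
    """Classify gradient magnitudes: 2 = strong edge, 1 = weak edge, 0 = none."""
    cls = np.zeros(g.shape, dtype=np.uint8)
    cls[g > high] = 2
    cls[(g > low) & (g <= high)] = 1
    return cls

def hysteresis(cls):
    """Keep weak edges only if 8-connected to a strong edge (single pass)."""
    h, w = cls.shape
    out = (cls == 2).astype(np.uint8)
    for y in range(h):
        for x in range(w):
            if cls[y, x] == 1:
                y0, y1 = max(y - 1, 0), min(y + 2, h)
                x0, x1 = max(x - 1, 0), min(x + 2, w)
                if (cls[y0:y1, x0:x1] == 2).any():
                    out[y, x] = 1          # weak pixel adjacent to a strong edge
    return out
```

With a 2:1 ratio as in this embodiment, a call might look like `hysteresis(double_threshold(g, 25.0, 50.0))`.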
Example 3
The application provides a method for image segmentation and image recognition of an ultrasonic imaging image, which comprises the following steps:
s1, calculating a clustering fuzzy membership matrix by using a fuzzy C-means algorithm on an ultrasonic imaging image;
s2, performing binarization processing on the fuzzy membership matrix to obtain a segmented target area image, wherein the target area image is an image of a space-occupying lesion part;
s3, storing the sums of absolute gray-level differences of the target area image in a matrix I;
s4, calculating the average gray value A_i of the adjacent pixels in the neighborhood according to the difference δ between the gray value of the target area image and the average gray value of the adjacent pixels in its neighborhood;
s5, combining the average gray values A_i of the adjacent pixels with the matrix I to quantize a gray differential matrix; the matrix parameters of the gray differential matrix comprise the gray level, the gray probability, and the sum of absolute gray-level differences of each pixel in the image;
s6, calculating roughness and contrast of the image according to matrix parameters of the gray differential matrix, and finishing texture feature recognition.
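The steps S3 to S6 can be sketched as follows. The coarseness and contrast formulas in this sketch follow the classical neighborhood gray-tone difference matrix (Amadasun–King) definitions, which the patent does not reproduce, so they are an assumption rather than the patented formulas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ngtdm_features(image, levels=8):
    """Quantize the image to `levels` gray levels, compute the mean gray
    value A_i of each pixel's 3x3 neighbourhood (excluding the pixel
    itself), accumulate per-level probabilities p and sums of absolute
    differences s, then derive coarseness and contrast."""
    img = image.astype(float)
    q = np.floor((img - img.min()) / (np.ptp(img) + 1e-12) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)                # gray levels 0 .. levels-1
    nb_sum = uniform_filter(q.astype(float), size=3, mode='mirror') * 9
    A = (nb_sum - q) / 8.0                       # neighbourhood mean, centre excluded
    inner = (slice(1, -1), slice(1, -1))         # ignore the image border
    qi, Ai = q[inner].ravel(), A[inner].ravel()
    n = qi.size
    s = np.zeros(levels)                         # sum of |i - A_i| per level
    p = np.zeros(levels)                         # gray-level probability
    for lvl in range(levels):
        mask = qi == lvl
        p[lvl] = mask.sum() / n
        s[lvl] = np.abs(lvl - Ai[mask]).sum()
    coarseness = 1.0 / (np.dot(p, s) + 1e-12)
    ii, jj = np.meshgrid(np.arange(levels), np.arange(levels), indexing='ij')
    ng = int((p > 0).sum())                      # number of levels present
    contrast = (np.sum(p[:, None] * p[None, :] * (ii - jj) ** 2)
                / max(ng * (ng - 1), 1)) * (s.sum() / n)
    return coarseness, contrast
```

A perfectly flat image has zero contrast and very high coarseness, while a textured image yields finite, positive values for both.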
In S1, a fuzzy C-means algorithm is used, whose objective function is:

J(U, C) = Σ_{i=1..N} Σ_{k=1..K} (u_ik)^m ‖x_i − c_k‖², x_i ∈ S

In the above formula, J(U, C) is the weighted sum of squared distances from each data point to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, U = [u_ik] is the membership matrix, m is the weighting exponent, c is the center of each class, and x denotes the individual data points.
For the above steps, after the fuzzy C-means algorithm is used in step S1, the Canny algorithm is further used to perform secondary edge detection and secondary image segmentation. This effectively eliminates the fuzzy regions that remain at the image edges after the first segmentation by the fuzzy C-means algorithm, making the image processing result more accurate.
In step S4, the average gray value of the adjacent pixels in the neighborhood is calculated according to formula (5):

A_i = (1/K) Σ_{k_x} Σ_{k_y} Σ_{k_z} x_gl(j_x + k_x, j_y + k_y, j_z + k_z), (k_x, k_y, k_z) ≠ (0, 0, 0)   (5)

where the sums run over the neighborhood of the pixel at (j_x, j_y, j_z). In formula (5), A_i is the average gray value, X_gl is the set of pixels of the area image, x_gl(j_x, j_y, j_z) ∈ X_gl is the gray level of a pixel, K is the number of pixels with gray level (j_x, j_y, j_z), and δ is the difference between the gray value of the target area image and the average gray value of the adjacent pixels in its neighborhood.
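For illustration, a 2-D version of this neighborhood average can be computed as below (the patent states the 3-D case; the function name and the 2-D restriction are my own):

```python
import numpy as np

def neighbour_average(img, j_x, j_y, delta=1):
    """Average gray value A_i over the (2*delta+1)^2 neighbourhood of
    pixel (j_x, j_y), excluding the centre pixel itself."""
    patch = img[j_x - delta:j_x + delta + 1,
                j_y - delta:j_y + delta + 1].astype(float)
    K = patch.size - 1                  # number of neighbours (centre excluded)
    return (patch.sum() - img[j_x, j_y]) / K
```

For the 3×3 image with values 0..8, the centre pixel's neighbours average to (36 − 4) / 8 = 4.0.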
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application, and are intended to be included within the scope of the appended claims and description.

Claims (7)

1. An identification method capable of automatically identifying image features is characterized by comprising the following steps:
s1, calculating a clustering fuzzy membership matrix by using a fuzzy C-means algorithm on an ultrasonic imaging image; when the fuzzy C-means algorithm is used, the objective function is as follows:
J(U, C) = Σ_{i=1..N} Σ_{k=1..K} (u_ik)^m ‖x_i − c_k‖², x_i ∈ S

in the above formula, J(U, C) is the weighted sum of squared distances from each data point to each cluster center, S is the data set, N is the number of data points in the data set, K is the number of cluster centers, U = [u_ik] is the membership matrix, m is the weighting exponent, c is the center of each category, and x is each data point;
s2, performing binarization processing on the fuzzy membership matrix to obtain a segmented target area image;
s3, storing the sum I of absolute differences of gray levels of the target area image in a matrix I;
s4, calculating the average gray value A_i of the adjacent pixels in the neighborhood according to the difference δ between the gray value of the target area image and the average gray value of the adjacent pixels in its neighborhood, the calculation formula being:

A_i = (1/K) Σ_{k_x} Σ_{k_y} Σ_{k_z} x_gl(j_x + k_x, j_y + k_y, j_z + k_z), (k_x, k_y, k_z) ≠ (0, 0, 0)

in the above formula: A_i is the average gray value, X_gl is the set of pixels of the area image, x_gl(j_x, j_y, j_z) ∈ X_gl is the gray level of a pixel, K is the number of pixels with gray level (j_x, j_y, j_z), j_x + k_x, j_y + k_y, and j_z + k_z respectively index the image pixels in the x, y, and z directions, and δ is the difference between the gray value of the target area image and the average gray value of the adjacent pixels in its neighborhood;
s5, multiplying the average gray values A_i of the adjacent pixels by the matrix I to quantize a gray differential matrix; the matrix parameters of the gray differential matrix comprise the gray level, the gray probability, and the sum of absolute gray-level differences of each pixel in the image;
s6, calculating roughness and contrast of the image according to matrix parameters of the gray differential matrix, and finishing texture feature recognition.
2. The identification method capable of automatically identifying image features according to claim 1, characterized in that: after the fuzzy C-means algorithm is used in S1, the Canny algorithm is also used for secondary edge detection and secondary image segmentation.
3. An ultrasonic inspection system capable of automatically identifying image features, characterized in that: for implementing the identification method of claim 1 or 2; the ultrasonic inspection system comprises an ultrasonic probe, a transmitting and receiving unit, a signal processing unit, an image processing unit and a display output unit which are sequentially connected;
the ultrasonic probe is used for realizing conversion between an electric signal and an ultrasonic signal;
the transmitting and receiving unit transmits ultrasonic signals and receives ultrasonic echo signals through the ultrasonic probe and transmits the ultrasonic echo signals to the signal processing unit;
the signal processing unit demodulates the signals transmitted by the transmitting and receiving unit, recovers image information carried by the ultrasonic echo signals, synthesizes the image information into an ultrasonic imaging image and transmits the ultrasonic imaging image to the image processing unit;
the image processing unit analyzes the ultrasonic imaging image, and comprises edge detection and image segmentation by using a fuzzy C-means algorithm and texture feature recognition by using a gray level differential matrix algorithm;
the display output unit is used for displaying the ultrasonic imaging image and also used for displaying the characteristic marks of the texture features identified by the image processing unit.
4. An ultrasound inspection system capable of automatically identifying image features according to claim 3, wherein: the feature markers are displayed in an enlarged manner.
5. An ultrasound inspection system capable of automatically identifying image features according to claim 3, wherein: the display output unit automatically pops up a display window of the magnified texture feature.
6. An ultrasound inspection system capable of automatically identifying image features according to claim 3, wherein: the feature markers are displayed in different colors for different texture features.
7. An ultrasound inspection system capable of automatically identifying image features according to claim 3, wherein: the image processing unit further uses the Canny algorithm to perform secondary edge detection and secondary image segmentation after using the fuzzy C-means algorithm.
CN202111057492.4A 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method Active CN113689424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111057492.4A CN113689424B (en) 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method


Publications (2)

Publication Number Publication Date
CN113689424A CN113689424A (en) 2021-11-23
CN113689424B true CN113689424B (en) 2023-11-24

Family

ID=78586197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111057492.4A Active CN113689424B (en) 2021-09-09 2021-09-09 Ultrasonic inspection system capable of automatically identifying image features and identification method

Country Status (1)

Country Link
CN (1) CN113689424B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830023B (en) * 2023-02-16 2023-05-09 山东中都机器有限公司 Image processing-based belt conveyor belt defect detection method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106618632A (en) * 2016-12-14 2017-05-10 无锡祥生医学影像有限责任公司 Ultrasonic imaging system and method with automatic optimization
CN107403438A (en) * 2017-08-07 2017-11-28 河海大学常州校区 Improve the ultrasonoscopy focal zone dividing method of fuzzy clustering algorithm
CN110349160A (en) * 2019-06-25 2019-10-18 电子科技大学 One kind is based on super-pixel and fuzzy C-means clustering SAR image segmentation method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
ITVA20020060A1 (en) * 2002-11-22 2004-05-23 St Microelectronics Srl METHOD OF ANALYSIS OF IMAGES DETECTED FROM A MICRO-ARRAY
CN100484479C (en) * 2005-08-26 2009-05-06 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image enhancement and spot inhibition method
TWI483711B (en) * 2012-07-10 2015-05-11 Univ Nat Taiwan Tumor detection system and method of breast ultrasound image


Non-Patent Citations (1)

Title
Color Doppler ultrasound diagnosis of acute suppurative thyroiditis complicated by abscess: a case report; Gong Qingfang; Medical Theory and Practice; Vol. 30, No. 8; p. 1095 *


Similar Documents

Publication Publication Date Title
CN106682435B (en) System and method for automatically detecting lesion in medical image through multi-model fusion
US8295565B2 (en) Method of image quality assessment to produce standardized imaging data
US5452367A (en) Automated method and system for the segmentation of medical images
KR101121353B1 (en) System and method for providing 2-dimensional ct image corresponding to 2-dimensional ultrasound image
US7903861B2 (en) Method for classifying breast tissue density using computed image features
CA2474417A1 (en) Image processing using measures of similarity
US20050129297A1 (en) Classification of breast lesion method and system
Hiremath et al. Follicle detection in ultrasound images of ovaries using active contours method
CN110772286A (en) System for discernment liver focal lesion based on ultrasonic contrast
US20100040263A1 (en) Methods for enhancing vascular patterns in cervical imagery
Koundal et al. Challenges and future directions in neutrosophic set-based medical image analysis
CN106327480B (en) Thyroid CT image abnormal density detection method
Ananth et al. Liver And Hepatic Tumors Segmentation in 3-D CT Images
CN113689424B (en) Ultrasonic inspection system capable of automatically identifying image features and identification method
Edwin et al. Liver and tumour segmentation from abdominal CT images using adaptive threshold method
CN116523802B (en) Enhancement optimization method for liver ultrasonic image
CN111127404B (en) Medical image contour rapid extraction method
CN116452523A (en) Ultrasonic image quality quantitative evaluation method
CN102124471A (en) Methods for enhancing vascular patterns in cervical imagery
CN113940704A (en) Thyroid-based muscle and fascia detection device
Ashame et al. Abnormality Detection in Eye Fundus Retina
CN114757894A (en) Bone tumor focus analysis system
Gopinath Tumor detection in prostate organ using canny edge detection technique
WO2001082787A2 (en) Method for determining the contour of an in vivo organ using multiple image frames of the organ
US20050141758A1 (en) Method, apparatus, and program for discriminating calcification patterns

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant