CN109754407B - Ultrasonic image processing method, device and equipment - Google Patents

Publication number: CN109754407B (application CN201910021838.1A; earlier publication CN109754407A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 于琦, 马克涛, 金阳
Assignee (original and current): Qingdao Hisense Medical Equipment Co Ltd
Legal status: Active (application granted)
Abstract

The embodiments of the invention provide an ultrasonic image processing method, device and equipment. The method comprises the following steps: acquiring a coherence factor for each pixel point of an ultrasound image; determining a reliability value for each pixel point according to its coherence factor; and processing the ultrasound image according to the reliability values of the pixel points. The method of the embodiments of the invention improves the quality of the ultrasound image.

Description

Ultrasonic image processing method, device and equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an ultrasonic image processing method, device and equipment.
Background
In Delay and Sum (DAS) beamforming, a different time delay is introduced for each element of the array and the delayed signals are accumulated, so that each point in the sound field can be imaged; this is currently the most widely used ultrasound imaging method. If the reflected wave is received from the focus, the delay-processed signals of the multiple elements are in phase, and high-amplitude beam data are obtained; otherwise, the phase differences between the signals received by the elements yield only low-amplitude beam data. Sound-velocity non-uniformity and side-lobe components both degrade the quality of the ultrasound image.
To improve ultrasound image quality, the prior art exploits the correlation between the coherence factor and the degree of sound-velocity non-uniformity (or the strength of the side-lobe component) to suppress points with non-uniform sound velocity or strong side lobes, for example by scaling down the beamformed amplitude. However, low pixel amplitudes in an ultrasound image can also be correct information: forcibly reducing the amplitude of points with non-uniform sound velocity or strong side-lobe components confuses them with genuinely low-amplitude regions, introduces erroneous information, increases image blur, and produces many small black holes.
Disclosure of Invention
The embodiments of the invention provide an ultrasonic image processing method, device and equipment, which solve the problems caused in the prior art by suppressing points with non-uniform sound velocity or strong side-lobe components, so as to improve the quality of the ultrasound image.
In a first aspect, an embodiment of the present invention provides an ultrasound image processing method, including:
acquiring a coherence factor of each pixel point of an ultrasonic image;
determining the reliability value of each pixel point according to the coherence factor;
and processing the ultrasonic image according to the credibility value of each pixel point.
In a possible implementation manner, obtaining a coherence factor of each pixel point of an ultrasound image includes:
acquiring an enhancement area and a sound absorption area of an ultrasonic image;
and determining the coherence factor of each pixel point in the enhancement area and the sound absorption area.
In a possible implementation manner, determining a coherence factor of each pixel point in the enhancement region includes:
determining the coherence factor of each pixel point in the enhancement region according to the following formula:

coh1(z, l) = |Σ_{n=1}^{N} x_n(z, l)|² / (N · Σ_{n=1}^{N} x_n(z, l)²)

wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, and coh1(z, l) represents the coherence factor of the pixel point (z, l) in the enhancement region.
In a possible implementation manner, determining a coherence factor of each pixel point in the sound absorption region includes:
determining the coherence factor of each pixel point in the sound absorption area according to the following formula:

coh2(z, l) = (|Σ_{n=1}^{N} x_n(z, l)|² + C) / (N · Σ_{n=1}^{N} x_n(z, l)² + C)

wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, C is a constant with C ∈ [0, 1], and coh2(z, l) represents the coherence factor of the pixel point (z, l) in the sound absorption region.
In a possible implementation manner, determining the confidence value of each pixel point according to the coherence factor includes:
determining the binarization reliability value of each pixel point according to the following formula:

con_bin(z, l) = 1, if coh(z, l) ≥ T; con_bin(z, l) = 0, if coh(z, l) < T

wherein coh(z, l) represents the coherence factor of the pixel point (z, l), T is a preset coherence factor threshold, and con_bin(z, l) represents the binarization reliability value of the pixel point (z, l).
In a possible implementation manner, determining the confidence value of each pixel point according to the coherence factor includes:
carrying out image segmentation on the ultrasonic image according to a maximum inter-class variance method and an active contour model method;
and determining the reliability value of each pixel point according to the segmented ultrasonic image and the coherence factor.
In a possible implementation manner, processing the ultrasound image according to the reliability value of each pixel point includes:
carrying out post-processing on a first area of the ultrasonic image, wherein the first area is an area formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image;
and according to the post-processed first area, interpolating a second area of the ultrasonic image, wherein the second area is an area formed by pixel points of which the reliability values are smaller than a preset reliability threshold value in the ultrasonic image.
In a second aspect, an embodiment of the present invention provides an ultrasound image processing apparatus, including:
the acquisition module is used for acquiring the coherence factor of each pixel point of the ultrasonic image;
the determining module is used for determining the reliability value of each pixel point according to the coherence factor;
and the processing module is used for processing the ultrasonic image according to the credibility value of each pixel point.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the ultrasound image processing method of any of the first aspects.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the ultrasound image processing method according to any one of the first aspect.
According to the ultrasonic image processing method, device and equipment of the embodiments of the invention, the coherence factor of each pixel point of the ultrasound image is acquired, the reliability value of each pixel point is determined from the coherence factor, and the ultrasound image is processed according to those reliability values, thereby improving the quality of the ultrasound image. Because the reliability value of a pixel point is determined through the coherence factor and the image is processed according to reliability rather than by suppression, the prior-art problems of introducing erroneous information, increasing image blur and introducing many small black holes are avoided.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart of an ultrasound image processing method according to an embodiment of the present invention;
FIG. 2A to FIG. 2C are schematic diagrams of the reflected waves of a strong reflector;
FIG. 3 is a schematic diagram of a prior-art ultrasound system;
FIG. 4 is a schematic diagram of an ultrasound system according to an embodiment of the present invention;
FIG. 5A to FIG. 5B are schematic diagrams of the effect of processing the enhancement region;
FIG. 6A to FIG. 6B are schematic diagrams of the effect of processing the sound absorption region;
FIG. 7 is a schematic structural diagram of an ultrasound image processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided by the present invention.
The above drawings illustrate certain embodiments of the invention, which are described in more detail below. The drawings and the description are not intended to limit the scope of the inventive concept in any way, but to illustrate it for those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The terms "first" and "second" in the present application are used for identification purposes only and are not to be construed as indicating or implying a sequential relationship, relative importance, or implicitly indicating the number of technical features indicated. "plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a flowchart of an ultrasound image processing method according to an embodiment of the present invention. As shown in fig. 1, the method of this embodiment may include:
s101, obtaining a coherence factor of each pixel point of the ultrasonic image.
The coherence factor in this embodiment may be used to measure coherence of the echo signal, and may be used to determine the focus quality, for example, may be defined as a ratio of coherent signal energy to total signal energy in the echo signal. In this embodiment, the pixel point of the ultrasound image may be uniquely identified by using the horizontal coordinate and the vertical coordinate of the pixel point.
The principle of the coherence factor is explained below with reference to the drawings. FIG. 2A to FIG. 2C are schematic diagrams of the reflected waves of a strong reflector. FIG. 2A shows the reflected waves of the strong reflector reaching the array transducer. As shown in FIG. 2A, point 112 is a strong reflection point; a, b and c denote different center lines; 110 denotes the receiving transducer; 116a, 116b and 116c denote the iso-amplitude curves for center lines a, b and c, respectively; and 114a, 114b and 114c denote the reflected waves from point 112 received at center lines a, b and c, respectively. FIG. 2B shows the iso-amplitude curves of the reflected wave: from left to right, the iso-amplitude curves of the reflected waves from the strong reflection point 112 at center lines a, b and c. As can be seen from FIG. 2B, the closer the center line is to the strong reflection point, the smaller the slope of the echo wavefront; the farther the center line is from the strong reflection point, the larger the slope of the echo wavefront. FIG. 2C shows the amplitude curves of the reflected wave: from left to right, the amplitude curves of the reflected waves from the strong reflection point 112 at center lines a, b and c. FIG. 2C depicts the received-signal amplitudes at the different array elements of the receiving transducer 110, for center lines a, b and c, when the preset sound velocity does not match the actual value or when there is an error in the receive-delay calculation. As can be seen from FIG. 2C, the closer the center line is to the strong reflection point 112, the more gradually the amplitudes of the signals received by the different array elements change, the fewer the zero crossings, and the less frequently the sign bit of the echo signal changes; the farther the center line is from the strong reflection point 112, the more severely the amplitudes change, the more the zero crossings, and the more frequently the sign bit of the echo signal changes. As the center line moves from a to c, the increasingly strong echo of the strong reflection point 112 is regarded as a side-lobe component; such changes can be characterized by the degree of variation of the received-signal amplitudes across the array elements, the number of zero crossings, the frequency of sign-bit changes of the echo signal, and the like.
The coherence factor in this embodiment can be determined, for example, according to the following formula:

CF = |Σ_{i=1}^{M} S_i|² / (M · Σ_{i=1}^{M} |S_i|²)

wherein S_i represents the delayed echo signal of the i-th channel and M represents the number of channels. CF represents the coherence factor; the less consistent the echo-signal amplitudes of the different channels, the smaller the CF.
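As an illustration (not code from the patent; the function name is assumed), the CF above can be computed from the M delayed channel samples as follows:

```python
import numpy as np

def coherence_factor(s):
    """CF = |sum(S_i)|^2 / (M * sum(|S_i|^2)), a value in [0, 1]."""
    s = np.asarray(s, dtype=float)
    m = s.size
    total = np.sum(np.abs(s) ** 2)
    if total == 0.0:
        return 0.0  # no signal: report zero coherence
    return float(np.abs(np.sum(s)) ** 2 / (m * total))
```

Perfectly in-phase channels give CF = 1; fully alternating signs cancel in the coherent sum and give CF = 0.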
The coherence factor in this embodiment may also be determined according to the following formula, for example:

GCF = Σ_{|k| ≤ M₀} |P(k)|² / Σ_{k} |P(k)|²

wherein P(k) is the spectrum of the delayed channel signals and M₀ is a preset cutoff around the direct-current (DC) component. When the amplitudes of the echo signals of the different channels are more consistent, the DC component of the amplitude spectrum is closer to the total power of the spectrum; when the amplitudes vary more severely, the proportion of the DC component in the total power is smaller. GCF represents the generalized coherence factor.
The coherence factor in this embodiment may also be determined according to the following formula, for example:

SCF = |1 − σ|^p, with σ = √(1 − ((1/M) · Σ_{i=1}^{M} b_i)²)

wherein b_i represents the sign bit of the echo signal of the i-th channel: b_i = −1 when the signal amplitude is less than 0, and b_i = 1 when the signal amplitude is greater than 0. SCF represents the sign coherence factor, M represents the number of channels, and p is an exponential parameter used to adjust the sensitivity of the SCF. The more consistent the signs of the echo signals of the different channels, the closer SCF is to 1.
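The sign-bit computation can be sketched as follows (a sign coherence factor of the Camacho-Fritsch form; zero amplitude is treated as positive here, which the patent text leaves unspecified):

```python
import numpy as np

def sign_coherence_factor(s, p=1.0):
    """SCF = |1 - sigma|^p, sigma = std of the +/-1 sign bits of the channels."""
    s = np.asarray(s, dtype=float)
    b = np.where(s < 0, -1.0, 1.0)  # sign bit; zero amplitude treated as +1
    sigma = np.sqrt(max(0.0, 1.0 - b.mean() ** 2))  # std of a +/-1 sequence
    return float(abs(1.0 - sigma) ** p)
```

All channels sharing one sign gives sigma = 0 and SCF = 1; an even split of signs gives sigma = 1 and SCF = 0.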
The coherence factor in this embodiment may also be determined according to the following formula, for example:

GSCF = Σ_{|k| ≤ k₀} |B(k)|² / Σ_{k} |B(k)|²

wherein b_i represents the sign bit of the echo signal of the i-th channel (b_i = −1 when the signal amplitude is less than 0, b_i = 1 when the signal amplitude is greater than 0), and B(k) is the spectrum of the sign sequence b_1, …, b_M. GSCF represents the generalized sign coherence factor and M represents the number of channels. As can be seen from the above formula, the higher the proportion of the near-DC components of the sign-sequence spectrum in the total energy, the more uniform the echo-signal amplitudes of the different channels, and the closer GSCF is to 1.
The coherence factor in this embodiment may also be determined according to the following formulas, for example:

c_i = (1 − b_i · b_{i+1}) / 2, i = 1, …, M−1

a = 1 − (1/(M−1)) · Σ_{i=1}^{M−1} c_i

STF = a

wherein b_i represents the sign bit of the echo signal of the i-th channel, and c_i = 1 exactly when the sign flips between channels i and i+1. The more frequently the signal crosses 0, the more of the c_i equal 1, the closer a is to 0, and the smaller the STF.
S102, determining the reliability value of each pixel point according to the coherence factor.
In this embodiment, after the coherence factor of each pixel point is determined, the reliability value of each pixel point is determined according to the coherence factor of each pixel point, and the reliability value may be used to measure the reliability of the beam forming result of the pixel point.
Generally, an ultrasound system sets the sound velocity to a fixed preset value. However, the sound velocity differs between different types of human soft tissue, and even within the same type of soft tissue it can vary because of tissue non-uniformity. Phase distortion caused by sound-velocity non-uniformity is an important source of degradation of ultrasound image quality.
It should be noted that the degree of sound-velocity non-uniformity is inversely related to the magnitude of the coherence factor: the more uniform the sound velocity, the larger the coherence factor, and the more non-uniform the sound velocity, the smaller the coherence factor. Likewise, the strength of the side-lobe component is inversely related to the coherence factor: the weaker the side-lobe component, the larger the coherence factor, and the stronger the side-lobe component, the smaller the coherence factor. If the coherence factor of a pixel point is small, the beamforming result of that pixel point is unreliable and is more likely to adversely influence the surrounding pixel points in the post-processing stage of the ultrasound image; if the coherence factor is large, the beamforming result is reliable and is less likely to adversely influence the surrounding pixel points.
In summary, in this embodiment, the confidence value of each pixel point is positively correlated to the coherence factor of each pixel point. For example, the coherence factor of a pixel point can be used as the confidence value of the pixel point; the positive multiple of the coherence factor of the pixel point can also be used as the confidence value of the pixel point.
S103, processing the ultrasonic image according to the reliability value of each pixel point.
Since the reliability value of the pixel point can be used to measure the reliability of the beam forming result of the pixel point, in this embodiment, after the reliability value of each pixel point is determined, the ultrasound image can be processed according to the reliability value of each pixel point. For example, the ultrasound image may be divided into a high-reliability region and a low-reliability region according to the reliability values of the pixel points, wherein the high-reliability region may be regarded as a useful information image, the low-reliability region may be regarded as a background image, and the low-reliability region is removed in the post-processing of the ultrasound image, so that the final imaging effect is not affected by the low-reliability region. The embodiment does not limit the specific implementation manner of dividing the high-reliability area and the low-reliability area.
According to the ultrasonic image processing method provided by this embodiment, the coherence factor of each pixel point of the ultrasound image is acquired, the reliability value of each pixel point is determined from the coherence factor, and the ultrasound image is processed according to those reliability values, thereby improving the quality of the ultrasound image. Because the reliability value is determined through the coherence factor and the image is processed according to reliability rather than by suppression, the prior-art problems of introducing erroneous information, increasing image blur and introducing many small black holes are avoided.
In some embodiments, one implementation manner of obtaining the coherence factor of each pixel point of the ultrasound image may be: acquiring an enhancement area and a sound absorption area of an ultrasonic image; and determining the coherence factor of each pixel point in the enhancement area and the sound absorption area.
In an ultrasound image, a region where an ultrasound echo is enhanced, for example, a region of the ultrasound image corresponding to some tissues, is an enhanced region of the ultrasound image. In the enhancement area of the ultrasonic image, the place with large coherence factor shows that the reliability is high; where the coherence factor is small, it indicates that the confidence is low.
Optionally, the coherence factor of each pixel point in the enhancement region may be determined according to the following formula:

coh1(z, l) = |Σ_{n=1}^{N} x_n(z, l)|² / (N · Σ_{n=1}^{N} x_n(z, l)²)

wherein z denotes the lateral coordinate, l denotes the longitudinal coordinate, x_n(z, l) denotes the pixel value at position (z, l) in the n-th frame image, N denotes the number of images, and coh1(z, l) denotes the coherence factor of the pixel point (z, l) in the enhancement region.
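A sketch of the enhancement-region map over a stack of N frames (the array shapes and function name are assumptions for illustration):

```python
import numpy as np

def coh1_map(frames):
    """Per-pixel coh1(z, l) = |sum_n x_n(z, l)|^2 / (N * sum_n x_n(z, l)^2).

    frames: array of shape (N, H, W) of co-registered frames.
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    num = np.abs(frames.sum(axis=0)) ** 2
    den = n * np.sum(frames ** 2, axis=0)
    out = np.zeros(frames.shape[1:])
    np.divide(num, den, out=out, where=den > 0)  # zero where there is no echo
    return out
```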
In an ultrasound image there are not only regions that enhance the ultrasonic echo, but also regions that absorb it, i.e. sound absorption regions, where the echo is nearly 0, such as the image regions corresponding to cysts. With the coherence-factor formula of the prior art, even the trusted part of a sound absorption region yields the lowest coherence factor, so the trusted part of the sound absorption region may be confused with the untrusted part of an enhancement region.
Optionally, the coherence factor of each pixel point in the sound absorption region may be determined according to the following formula:

coh2(z, l) = (|Σ_{n=1}^{N} x_n(z, l)|² + C) / (N · Σ_{n=1}^{N} x_n(z, l)² + C)

wherein z denotes the lateral coordinate, l denotes the longitudinal coordinate, x_n(z, l) denotes the pixel value at position (z, l) in the n-th frame image, N denotes the number of images, C is a constant with C ∈ [0, 1], and coh2(z, l) denotes the coherence factor of the pixel point (z, l) in the sound absorption region.
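The exact coh2 formula appears only as an equation image in the source; the sketch below is one plausible realization consistent with the surrounding text, in which the constant C keeps near-zero echo regions from scoring an unstable 0/0 and pushes them toward 1 (function name and form are assumptions):

```python
import numpy as np

def coh2_map(frames, c=0.5):
    """Assumed form: (|sum_n x_n|^2 + C) / (N * sum_n x_n^2 + C), C in [0, 1].

    For a cyst-like pixel where every x_n is ~0, the ratio tends to C/C = 1,
    so the trusted part of a sound absorption region keeps a high value.
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    num = np.abs(frames.sum(axis=0)) ** 2 + c
    den = n * np.sum(frames ** 2, axis=0) + c
    return num / den
```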
On the basis of the above embodiment, the ultrasound image processing method provided by this embodiment acquires the enhancement region and the sound absorption region of the ultrasound image and determines the coherence factor of each pixel point separately for the two regions according to their characteristics. This makes the reliability value derived from the coherence factor a more accurate measure of the reliability of the beamforming result of each pixel point, and further improves the quality of the processed ultrasound image.
In some embodiments, one implementation manner of determining the confidence value of each pixel point according to the coherence factor may be:
determining the binarization reliability value of each pixel point according to the following formula:

con_bin(z, l) = 1, if coh(z, l) ≥ T; con_bin(z, l) = 0, if coh(z, l) < T

wherein coh(z, l) represents the coherence factor of the pixel point (z, l), T is a preset coherence factor threshold, and con_bin(z, l) represents the binarization reliability value of the pixel point (z, l).
The present embodiment provides a specific implementation manner for determining the confidence value of each pixel point by using a threshold method, where a preset coherence factor threshold T may be determined according to coherence factors of an ultrasound image to be processed, for example, an average value of coherence factors corresponding to all pixel points of the ultrasound image may be used as T, or a median value of coherence factors corresponding to all pixel points of the ultrasound image may also be used as T, and the specific value of T is not limited in this embodiment.
In this embodiment, all pixel points with a confidence value of 1 may form a region as a confidence region, and the region is regarded as a useful information image; and taking all pixel point composition areas with the reliability values of 0 as untrusted areas, and regarding the untrusted areas as background images.
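The thresholding step can be sketched as follows (the mean and median choices for T follow the text above; the function name is illustrative):

```python
import numpy as np

def binarize_reliability(coh_map, t=None):
    """con_bin = 1 where coh >= T, else 0; T defaults to the map's mean."""
    coh_map = np.asarray(coh_map, dtype=float)
    if t is None:
        t = coh_map.mean()  # np.median(coh_map) is the other suggested choice
    return (coh_map >= t).astype(np.uint8)
```

Pixels mapped to 1 form the trusted region (useful-information image); pixels mapped to 0 form the untrusted region (background image).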
In some embodiments, one implementation manner of determining the confidence value of each pixel point according to the coherence factor may be: carrying out image segmentation on the ultrasonic image according to a maximum inter-class variance method and an active contour model method; and determining the reliability value of each pixel point according to the segmented ultrasonic image and the coherence factor.
In the embodiment, a general outline can be identified by adopting a maximum inter-class variance method, and then iterative operation of an active outline model method is performed for multiple times on the basis, so that a high-confidence area and a low-confidence area can be accurately segmented. The image segmentation in the present embodiment will be described in detail below.
The maximum inter-class variance method is an image thresholding method proposed by Nobuyuki Otsu in 1979. Image thresholding separates the object from the background with a global threshold when the gray-level distributions of object and background pixels are clearly separated. The method selects the threshold that maximizes the between-class variance and is an automatic, parameter-free, unsupervised threshold segmentation method. Because it operates on the one-dimensional gray-level histogram and is simple to compute, it is widely used. Its basic principle is as follows:
let the gray level (G ═ 1, 2.., L) of the image define the probability P that the ith gray level appears in the imageiDividing the gray value of the image into two classes C by using a threshold value t0And C1,C0Containing a grey level equal to or less than a threshold value, C1Involving grey levels greater than a threshold value, i.e. C0={1,2,...,t},C1T +1, t + 2. Get uγω (t) is C for the average gray scale of the whole image0The probability of occurrence, u (t), is the cumulative mean up to stage t. Using inter-class variance
Figure BDA0001941023410000101
As a function of the decision of the high or low classification performance,
Figure BDA0001941023410000102
can be determined according to the following formula:
Figure BDA0001941023410000103
the optimum threshold t*It can be determined that:
Figure BDA0001941023410000104
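The derivation above corresponds to the following exhaustive search (a standard Otsu implementation for illustration, not code from the patent):

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return the level t maximizing the between-class variance sigma_B^2(t)."""
    gray = np.asarray(gray).ravel()
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # omega(t): probability of class C0
    u = np.cumsum(p * np.arange(levels))    # u(t): cumulative mean up to t
    u_t = u[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (u_t * omega - u) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # levels with an empty class score 0
    return int(np.argmax(sigma_b))
```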
the geometric active contour model is one of the most successful segmentation models in recent years. Geometric active contour models are mainly divided into two categories: one is an active contour model based on image edge information; the other is an active contour model based on image region information. Wherein based on the image areaA typical representative of the Active contour model of the domain information is a Chan-Vese model, which is also called an Active Contexts Width Edge (ACWE) model. The Chan-Vese model does not depend on gradient information of an image, but realizes image segmentation according to the region segmentation principle, so that segmentation of an image with fuzzy edges or discontinuity can be realized, and certain noise resistance is realized. The principle is as follows: assuming that an image to be segmented is f (x), a closed curve C divides the image into a target area and a background area which are respectively represented by incide (C) and outtide (C), and C1And c2Represent the average gray levels of incide (C) and outtide (C), respectively. Defining an energy function E (C, C)1,c2):
E(C,c1,c2)=μL(C)+λ1incide(C)|f(x)-c1|2dx+λ2outcide(C)|f(x)-c2|2dx;
Wherein L (C) is the length term of the closed contour line C, mu, lambda1And λ2Are the weighting coefficients of the respective energy terms. Solving the problem of topological change of the curve by using a level set method, and expressing the movable contour line model C as a Lipschitz function, wherein the level set form is as follows:
Figure BDA0001941023410000105
the evolution equation evolves on all level set curves, and a minimum solution is searched globally, so that the purpose of segmentation is achieved.
In this embodiment, the contour is identified by using the maximum inter-class variance method, and then iterative operation of the Chan-Vese active contour model method is performed for a plurality of times, for example, the number of iterations can be set to 10, so that the segmentation accuracy can be improved.
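A much-simplified sketch of the region-competition step behind the Chan-Vese iteration (the curvature/length term and the level set machinery are omitted; only the two data terms of the ACWE energy drive the update, so this is illustrative, not the patent's algorithm):

```python
import numpy as np

def acwe_iterate(f, mask, n_iter=10, lam1=1.0, lam2=1.0):
    """Iteratively reassign pixels to the region whose mean fits them better.

    f    : gray image (H, W); mask: boolean initial inside/outside guess.
    Each step recomputes the region means c1, c2 and compares the data
    terms lam1*(f-c1)^2 vs lam2*(f-c2)^2 pointwise.
    """
    f = np.asarray(f, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    for _ in range(n_iter):
        if mask.all() or not mask.any():
            break  # one region vanished; means undefined
        c1 = f[mask].mean()
        c2 = f[~mask].mean()
        mask = lam1 * (f - c1) ** 2 < lam2 * (f - c2) ** 2
    return mask
```

Starting from a rough initial mask, the iteration converges to a two-region split of a piecewise-constant image.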
In some embodiments, one implementation manner of processing the ultrasound image according to the confidence value of each pixel point may be: carrying out post-processing on a first area of the ultrasonic image, wherein the first area is an area formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image; and according to the post-processed first area, interpolating a second area of the ultrasonic image, wherein the second area is an area formed by pixel points of which the reliability values are smaller than a preset reliability threshold value in the ultrasonic image.
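One simple way to realize the interpolation step just described, assuming a per-pixel confidence map and a threshold are given, is nearest-neighbour filling: every pixel of the second (low-confidence) region takes the value of the nearest pixel of the first (high-confidence) region. The sketch below uses SciPy's Euclidean distance transform; the threshold value 0.5 and all names are assumptions for the example, not from the patent.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_low_confidence(img, confidence, threshold=0.5):
    """Replace each low-confidence pixel with the value of the nearest
    high-confidence pixel (simple nearest-neighbour interpolation)."""
    low = confidence < threshold               # second region: unreliable pixels
    # With return_indices=True, distance_transform_edt returns, for every
    # pixel, the coordinates of the nearest zero (high-confidence) pixel.
    idx = distance_transform_edt(low, return_distances=False, return_indices=True)
    return img[tuple(idx)]

img = np.arange(25, dtype=float).reshape(5, 5)
conf = np.ones((5, 5))
conf[2, 2] = 0.1                               # one unreliable pixel
out = fill_low_confidence(img, conf)
```

High-confidence pixels map to themselves, so the first region passes through unchanged while the second region is interpolated from it, mirroring the two-step processing described above.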
Fig. 3 is a schematic diagram of the architecture of an ultrasound system in the prior art. As shown in fig. 3, the arrows indicate the signal transmission direction. In the ultrasound system, the ultrasonic echo signal is received by a receiving chip, undergoes analog amplification and sampling, and then enters the beamforming unit, which is located at the front end of the system and performs in-phase weighted superposition of the echo data of each channel. Receive beamforming can include three processing stages: delay, apodization, and accumulation.
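The three stages can be sketched in code. The following is a generic delay-and-sum illustration with integer-sample delays; all names and the synthetic data are invented for the example and are not the beamformer described in this patent.

```python
import numpy as np

def delay_and_sum(channel_data, delays, apod):
    """Receive beamforming: per-channel integer-sample delay compensation,
    apodization weighting, then accumulation across channels.
    channel_data: (n_channels, n_samples) echo data
    delays:       per-channel delay in samples
    apod:         per-channel apodization weights
    """
    n_ch, n_s = channel_data.shape
    out = np.zeros(n_s)
    for ch in range(n_ch):
        aligned = np.roll(channel_data[ch], -delays[ch])   # delay (wrap-around simplification)
        out += apod[ch] * aligned                          # apodize + accumulate
    return out

# a coherent echo arriving with a different delay on each channel
n_ch, n_s = 8, 256
pulse = np.sin(2 * np.pi * 0.1 * np.arange(16))
delays = np.arange(n_ch)                                   # linear delay profile
data = np.zeros((n_ch, n_s))
for ch in range(n_ch):
    data[ch, 100 + delays[ch]:116 + delays[ch]] = pulse
apod = np.hanning(n_ch + 2)[1:-1]                          # strictly positive Hann weights
beam = delay_and_sum(data, delays, apod)
```

After delay compensation the per-channel pulses line up at the same samples, so the weighted accumulation adds them in phase, which is the "in-phase weighted superposition" described above.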
In general, in the acoustic field formed by transmitting and receiving an ultrasonic signal, besides the main lobe, which determines the resolution of the image, there are unwanted signal components such as side lobes, grating lobes, and noise. Grating lobes are mainly related to the element spacing, transmit wavelength, and transmit steering angle of the ultrasonic transducer; noise is mainly suppressed through delay-and-sum accumulation, multichannel reception, and the like; and the common technique for suppressing side lobes is apodization, that is, an amplitude weighting applied to the transmitting or receiving elements such that the central elements have the largest weight and the edge elements the smallest.
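The effect of such center-max, edge-min weighting can be checked numerically: taking the FFT magnitude of the aperture weights as an approximation of the far-field beam pattern, a Hann-apodized aperture shows a much lower peak sidelobe than a uniform one. This is a self-contained illustration, not taken from the patent; the guard band used to exclude the main lobe is a heuristic of the sketch.

```python
import numpy as np

def peak_sidelobe_db(weights, nfft=4096, guard_bins=None):
    """Peak sidelobe level (dB relative to the main lobe) of an aperture
    weighting, using the FFT of the weights as the far-field beam pattern."""
    n = len(weights)
    if guard_bins is None:
        guard_bins = 3 * nfft // n             # exclude the main-lobe region
    pattern = np.abs(np.fft.fft(weights, nfft))
    pattern /= pattern.max()
    sidelobes = pattern[guard_bins:nfft - guard_bins]
    return 20 * np.log10(sidelobes.max())

n_elem = 64
rect = np.ones(n_elem)                         # no apodization (uniform weights)
hann = np.hanning(n_elem)                      # center max, edges min
psl_rect = peak_sidelobe_db(rect)
psl_hann = peak_sidelobe_db(hann)
```

The trade-off is the usual one: the apodized aperture buys its lower sidelobes with a wider main lobe, i.e. slightly reduced lateral resolution.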
Some documents and patents propose a coherence factor: a gain-adjustment coefficient, computed from the degree of phase uniformity along the element arrangement direction, that is multiplied with the beam data after cumulative summation in order to suppress side lobes and reduce the phase jitter caused by sound speed inhomogeneity. The more the phases of the delayed received signals coincide with each other, the smaller the unwanted signal components and the more dominant the main lobe component, and therefore the larger the calculated coefficient. Conversely, the larger the phase variation between the delayed received signals, the larger the unwanted signal components are considered to be, and the smaller the calculated coefficient.
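The classical coherence factor described here is usually written CF = |Σₙ sₙ|² / (N·Σₙ |sₙ|²), the ratio of coherent-sum energy to N times the incoherent-sum energy. The snippet below illustrates its behaviour on aligned versus phase-jittered signals; it is a generic illustration over 16 toy channels, not the patent's formula 1 or formula 2.

```python
import numpy as np

def coherence_factor(signals):
    """Classical coherence factor: near 1.0 for phase-aligned signals,
    small for signals with random phases."""
    n = len(signals)
    coherent = np.abs(np.sum(signals, axis=0)) ** 2
    incoherent = n * np.sum(np.abs(signals) ** 2, axis=0)
    return coherent / np.maximum(incoherent, 1e-12)   # guard against 0/0

rng = np.random.default_rng(1)
t = np.arange(128)
# 16 channels with identical phase vs. 16 channels with random phase offsets
aligned = np.tile(np.sin(2 * np.pi * 0.05 * t), (16, 1))
jittered = np.stack([np.sin(2 * np.pi * 0.05 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(16)])
cf_aligned = coherence_factor(aligned).mean()
cf_jittered = coherence_factor(jittered).mean()
```

This matches the qualitative rule in the text: coinciding phases give a coefficient near 1, while large phase variation drives the coefficient toward zero.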
Fig. 4 is a schematic structural diagram of an ultrasound system according to an embodiment of the present invention. As shown in fig. 4, on the basis of the ultrasound system shown in fig. 3, the ultrasound system provided in this embodiment caches the received ultrasound image data and performs image segmentation on the cached image, dividing it into an enhancement region and a sound absorption region. IQ transformation is then applied to the enhancement region and the sound absorption region, the coherence factor is calculated according to the coherence factor formula of each region, a confidence value is determined from the coherence factor, and the determined confidence value is input to the image post-processing module.
In this embodiment, the coherence factor of the enhancement region is calculated using formula 1 of the above embodiment, and the coherence factor of the sound absorption region is calculated using formula 2 of the above embodiment.
In existing ultrasound systems, because the total volume of data received across all channels is very large, the per-channel data is usually beamformed in a Field-Programmable Gate Array (FPGA), so that the reduced data volume can then be transmitted to the Personal Computer (PC) side over a limited transmission bandwidth. Conventional FPGA architectures do not allow the per-channel data to be extracted to compute coherence coefficients. It is therefore difficult to calculate the coherence factor from the data of each channel: the amount of computation is large and is often difficult to implement in an FPGA, and even where it can be implemented, significant modifications to the conventional FPGA architecture are required.
In this embodiment, the coherence factor is determined using formula 1 and formula 2 from the data uploaded to the PC side after per-channel beamforming, so the original FPGA architecture does not need to be changed and the algorithm is easier to implement.
On the basis of the above embodiments, the following describes, through specific examples, how the ultrasound image processing method provided by the embodiments of the present invention improves the quality of an ultrasound image. The processing effects on the enhancement region and the sound absorption region are described separately. Figs. 5A-5B are schematic views of the effect of processing the enhancement region. Fig. 5A is an image obtained by processing the enhancement region of an ultrasound image according to the prior art, and fig. 5B is an image obtained by processing the same enhancement region with the ultrasound image processing method provided by the embodiment of the present invention. As can be seen from a comparison of fig. 5A and fig. 5B, the circular outline of the enhancement region is clearer in the image processed by the method provided by the embodiment of the present invention. Figs. 6A-6B are schematic views of the effect of processing the sound absorption region. Fig. 6A is an image obtained by processing the sound absorption region of an ultrasound image according to the prior art, and fig. 6B is an image obtained by processing the same sound absorption region with the ultrasound image processing method provided by the embodiment of the present invention. As can be seen from a comparison of fig. 6A and fig. 6B, the black outline in the sound absorption region is smaller after processing with the method provided by the embodiment of the present invention, which shows that the interference of the low-confidence region has been filtered out. In summary, the ultrasound image processing method provided by the embodiment of the present invention filters the low-reliability regions of the ultrasound image based on the coherence factor, thereby reducing their influence and improving the quality of the ultrasound image.
Fig. 7 is a schematic structural diagram of an ultrasound image processing apparatus according to an embodiment of the present invention; it is merely illustrative, and the present invention is not limited thereto. As shown in fig. 7, the ultrasound image processing apparatus 70 provided in this embodiment may include: an obtaining module 701, a determining module 702, and a processing module 703.
An obtaining module 701, configured to obtain a coherence factor of each pixel point of an ultrasound image;
a determining module 702, configured to determine a reliability value of each pixel according to the coherence factor;
the processing module 703 is configured to process the ultrasound image according to the confidence value of each pixel point.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
Optionally, the obtaining module 701 is configured to obtain a coherence factor of each pixel point of the ultrasound image, and specifically may include:
acquiring an enhancement area and a sound absorption area of an ultrasonic image;
and determining the coherence factor of each pixel point in the enhancement area and the sound absorption area.
Optionally, the coherence factor of each pixel point in the enhancement region may be determined according to the following formula:
Figure BDA0001941023410000131
wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, and coh1(z, l) represents the coherence factor of the pixel point (z, l) in the enhancement region.
Optionally, the coherence factor of each pixel point in the sound absorption region may be determined according to the following formula:
Figure BDA0001941023410000132
wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, C is a constant with C ∈ [0, 1], and coh2(z, l) represents the coherence factor of the pixel point (z, l) in the sound absorption region.
Optionally, the determining module 702 is configured to determine the reliability value of each pixel according to the coherence factor, and specifically may include:
determining the binarization credibility value of each pixel point according to the following formula:
con_bin(z, l) = 1, if coh(z, l) ≥ T
con_bin(z, l) = 0, if coh(z, l) < T
wherein coh(z, l) represents the coherence factor of the pixel point (z, l), T is a preset coherence factor threshold, and con_bin(z, l) represents the binarized credibility value of the pixel point (z, l).
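The binarization rule maps directly to an elementwise comparison. The following is a one-line sketch; the threshold value 0.6 is illustrative only, since T is a preset of the system and not specified here.

```python
import numpy as np

def binarize_confidence(coh, t=0.6):
    """con_bin(z, l) = 1 where the coherence factor reaches the preset
    threshold T, else 0. (t = 0.6 is an illustrative value.)"""
    return (coh >= t).astype(np.uint8)

coh = np.array([[0.9, 0.3],
                [0.6, 0.1]])
con_bin = binarize_confidence(coh)
```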
Optionally, the determining module 702 is configured to determine the reliability value of each pixel according to the coherence factor, and specifically may include:
carrying out image segmentation on the ultrasonic image according to a maximum inter-class variance method and an active contour model method;
and determining the reliability value of each pixel point according to the segmented ultrasonic image and the coherence factor.
Optionally, the processing module 703 is configured to process the ultrasound image according to the reliability value of each pixel, and may specifically include:
carrying out post-processing on a first area of the ultrasonic image, wherein the first area is an area formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image;
and according to the post-processed first area, interpolating a second area of the ultrasonic image, wherein the second area is an area formed by pixel points of which the reliability values are smaller than a preset reliability threshold value in the ultrasonic image.
Fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided by the present invention; it is merely illustrative, and the embodiment of the present invention is not limited thereto. As shown in fig. 8, the electronic device 80 provided in this embodiment may include: a memory 801, a processor 802, and a bus 803. The bus 803 is used to connect the various components.
The memory 801 stores a computer program, and when executed by the processor 802, the computer program may implement any of the technical solutions of the method embodiments described above.
Wherein the memory 801 and the processor 802 are electrically connected directly or indirectly to enable data transmission or interaction. For example, the elements may be electrically connected to each other via one or more communication buses or signal lines, such as bus 803. The memory 801 stores a computer program for implementing the ultrasound image processing method, which includes at least one software functional module that can be stored in the memory 801 in the form of software or firmware, and the processor 802 executes various functional applications and data processing by running the software program and module stored in the memory 801.
The memory 801 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 801 is used for storing programs, and the processor 802 executes the programs after receiving execution instructions. Further, the software programs and modules in the above-described memory 801 may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.) and may communicate with various hardware or software components to provide an operating environment for other software components.
The processor 802 may be an integrated circuit chip having signal processing capabilities. The processor 802 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and so on, and may implement or execute the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. It will be appreciated that the configuration of fig. 8 is merely illustrative and may include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 8. The components shown in fig. 8 may be implemented in hardware and/or software.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the ultrasound image processing method provided by any of the above method embodiments can be implemented. The computer-readable storage medium in this embodiment may be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, etc. that is integrated with one or more available media, and the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., SSDs), etc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. An ultrasound image processing method, comprising:
acquiring a coherence factor of each pixel point of an ultrasonic image;
determining the reliability value of each pixel point according to the coherence factor;
processing the ultrasonic image according to the credibility value of each pixel point;
the processing the ultrasonic image according to the reliability value of each pixel point comprises:
selecting a first area formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image, and taking the first area as a post-processed ultrasonic image;
the acquiring of the coherence factor of each pixel point of the ultrasonic image includes:
acquiring an enhancement area and a sound absorption area of the ultrasonic image;
determining a coherence factor of each pixel point in the enhancement area and the sound absorption area;
the determining the coherence factor of each pixel point in the enhancement region includes:
determining the coherence factor of each pixel point in the enhancement region according to the following formula:
Figure FDA0002944450770000011
the determining the coherence factor of each pixel point in the sound absorption region includes:
determining the coherence factor of each pixel point in the sound absorption area according to the following formula:
Figure FDA0002944450770000012
wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, coh1(z, l) represents the coherence factor of the pixel point (z, l) in the enhancement region, C is a constant with C ∈ [0, 1], and coh2(z, l) represents the coherence factor of the pixel point (z, l) in the sound absorption region.
2. The method of claim 1, wherein determining the confidence value of each pixel point according to the coherence factor comprises:
determining the binarization credibility value of each pixel point according to the following formula:
con_bin(z, l) = 1, if coh(z, l) ≥ T
con_bin(z, l) = 0, if coh(z, l) < T
wherein coh (z, l) represents the coherence factor of the pixel point (z, l), T is a preset coherence factor threshold, conbinAnd (z, l) represents the binarization credibility value of the pixel point (z, l).
3. The method of claim 1, wherein selecting the first region of the ultrasound image, which is composed of pixels with confidence values greater than or equal to a predetermined confidence threshold, comprises:
and according to a maximum inter-class variance method and an active contour model method, carrying out image segmentation on the ultrasonic image to obtain a first region formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image.
4. An ultrasound image processing apparatus characterized by comprising:
the acquisition module is used for acquiring the coherence factor of each pixel point of the ultrasonic image;
the determining module is used for determining the credibility value of each pixel point according to the coherence factor;
the processing module is used for processing the ultrasonic image according to the credibility value of each pixel point;
the processing the ultrasonic image according to the reliability value of each pixel point comprises:
selecting a first area formed by pixel points of which the reliability values are greater than or equal to a preset reliability threshold value in the ultrasonic image, and taking the first area as a post-processed ultrasonic image;
the acquisition module is specifically configured to: acquiring an enhancement area and a sound absorption area of the ultrasonic image; determining a coherence factor of each pixel point in the enhancement area and the sound absorption area;
the determining the coherence factor of each pixel point in the enhancement region includes:
determining the coherence factor of each pixel point in the enhancement region according to the following formula:
Figure FDA0002944450770000021
the determining the coherence factor of each pixel point in the sound absorption region includes:
determining the coherence factor of each pixel point in the sound absorption area according to the following formula:
Figure FDA0002944450770000022
wherein z represents the lateral coordinate, l represents the longitudinal coordinate, x_n(z, l) represents the pixel value at position (z, l) in the n-th frame image, N represents the number of images, coh1(z, l) represents the coherence factor of the pixel point (z, l) in the enhancement region, C is a constant with C ∈ [0, 1], and coh2(z, l) represents the coherence factor of the pixel point (z, l) in the sound absorption region.
5. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the ultrasound image processing method of any of claims 1-3.
6. A computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the ultrasound image processing method according to any one of claims 1 to 3.
CN201910021838.1A 2019-01-10 2019-01-10 Ultrasonic image processing method, device and equipment Active CN109754407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910021838.1A CN109754407B (en) 2019-01-10 2019-01-10 Ultrasonic image processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN109754407A CN109754407A (en) 2019-05-14
CN109754407B true CN109754407B (en) 2021-06-01

Family

ID=66405361


Country Status (1)

Country Link
CN (1) CN109754407B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110840484B (en) * 2019-11-27 2022-11-11 深圳开立生物医疗科技股份有限公司 Ultrasonic imaging method and device for adaptively matching optimal sound velocity and ultrasonic equipment
CN111466949B (en) * 2020-04-13 2023-06-27 剑桥大学南京科技创新中心有限公司 MMSE beam forming device, MMSE beam forming method and computer readable storage medium
CN111882515B (en) * 2020-09-28 2020-12-29 深圳华声医疗技术股份有限公司 Ultrasonic signal processing method, ultrasonic signal processing apparatus, and storage medium
CN113345041B (en) * 2021-05-20 2024-03-15 河南工业大学 Ultrasonic coherence factor determination method, ultrasonic image reconstruction method and electronic equipment
CN117503203B (en) * 2024-01-03 2024-03-22 之江实验室 Phase aberration correction method and system for ultrasonic ring array imaging

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5910115A (en) * 1997-09-22 1999-06-08 General Electric Company Method and apparatus for coherence filtering of ultrasound images
US20060173313A1 (en) * 2005-01-27 2006-08-03 Siemens Medical Solutions Usa, Inc. Coherence factor adaptive ultrasound imaging



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant