CN117338338A - Ultrasonic imaging methods, apparatus, devices, systems, media, and products - Google Patents

Ultrasonic imaging methods, apparatus, devices, systems, media, and products

Info

Publication number
CN117338338A
Authority
CN
China
Prior art keywords: data, sound ray, target, synthetic, ray data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311579945.9A
Other languages
Chinese (zh)
Inventor
李晓珍
朱超
韩宇
凌燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Feiyinuo Technology Co ltd
Original Assignee
Feiyinuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feiyinuo Technology Co ltd filed Critical Feiyinuo Technology Co ltd
Priority to CN202311579945.9A priority Critical patent/CN117338338A/en
Publication of CN117338338A publication Critical patent/CN117338338A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
                    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
                    • A61B 8/44: Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
                        • A61B 8/4444: Constructional features related to the probe
                            • A61B 8/445: Details of catheter construction
                    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
                        • A61B 8/5215: Involving processing of medical diagnostic data
                        • A61B 8/5269: Involving detection or reduction of artifacts
                            • A61B 8/5276: Involving detection or reduction of artifacts due to motion

Abstract

The present application relates to an ultrasound imaging method, apparatus, device, system, medium and product. The method comprises the following steps: acquiring a plurality of scanning sound ray data, wherein the plurality of scanning sound ray data are acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object; obtaining target sound ray data corresponding to each synthetic line in a plurality of synthetic lines according to the plurality of scanning sound ray data, wherein the angle difference between any two adjacent synthetic lines in the plurality of synthetic lines is the same; and performing ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object. The method avoids image shake in the obtained target ultrasonic image.

Description

Ultrasonic imaging methods, apparatus, devices, systems, media, and products
Technical Field
The present application relates to the field of ultrasound imaging technology, and in particular, to an ultrasound imaging method, apparatus, device, system, medium, and product.
Background
Intravascular ultrasound imaging is a medical technique that performs imaging with ultrasound from inside a human blood vessel. The ultrasound probe is typically mounted on a catheter or guidewire and introduced into a blood vessel in the body; high-quality images of the interior of the blood vessel are then generated in real time through the transmission and reception of ultrasound waves.
In the related art, an ultrasonic imaging method applied to blood vessels acquires a plurality of scanning sound ray data by a single-element ultrasonic probe rotating one circle inside the blood vessel, and performs ultrasonic image generation processing on the plurality of scanning sound ray data to obtain an ultrasonic image of one circle inside the blood vessel.
However, the ultrasonic image obtained by the above method is prone to image shake.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an ultrasound imaging method, apparatus, device, system, medium and product capable of avoiding image shake.
In a first aspect, the present application provides an ultrasound imaging method comprising:
acquiring a plurality of scanning sound ray data, wherein the plurality of scanning sound ray data are acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object;
obtaining target sound ray data corresponding to each synthetic line in the plurality of synthetic lines according to the plurality of scanned sound ray data, wherein the angle difference value between any two adjacent synthetic lines in the plurality of synthetic lines is the same;
and performing ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
In one embodiment, obtaining target sound ray data corresponding to each of a plurality of synthetic lines according to the plurality of scanning sound ray data includes:
Acquiring aperture angle thresholds corresponding to a plurality of imaging points on a target synthetic line, wherein the target synthetic line is any one synthetic line of the plurality of synthetic lines;
for each imaging point, determining a first number of first sound ray data corresponding to the imaging point according to the aperture angle threshold, wherein the first sound ray data are the scanning sound ray data, among the plurality of scanning sound ray data, whose scan angle differs from the synthesis angle of the target synthetic line by less than the aperture angle threshold;
if the first numbers corresponding to the imaging points are not all 1, acquiring, for each imaging point on the target synthetic line, delay data between the imaging point and the first number of first sound ray data respectively;
determining synthetic data of imaging points according to the first number of first sound ray data and corresponding delay data;
and obtaining target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point.
In one embodiment, obtaining the aperture angle thresholds corresponding to each of the plurality of imaging points on the target synthetic line includes:
for each imaging point, acquiring a depth value corresponding to the imaging point;
and determining an aperture angle threshold corresponding to the imaging point according to the depth value.
In one embodiment, acquiring delay data between the imaging point and the corresponding first number of first sound ray data respectively includes:
for each first sound ray data, acquiring a scanning distance between the scanning focus corresponding to the first sound ray data and the imaging point, and acquiring a synthetic distance between the synthetic focus of the target synthetic line and the imaging point;
obtaining a delay value between an imaging point and first sound ray data according to the scanning distance and the synthesized distance;
obtaining a delay direction between the imaging point and the first sound ray data according to the depth value of the imaging point and the depth value of the synthesized focus;
and obtaining delay data between the imaging point and the first sound ray data according to the delay value and the delay direction.
In one embodiment, obtaining target sound ray data corresponding to a target synthesis line according to synthesis data of each imaging point includes:
for each imaging point, performing frequency domain conversion processing on a first number of first sound ray data corresponding to the imaging point to obtain a first number of frequency domain data corresponding to the imaging point;
filtering the first quantity of frequency domain data corresponding to the imaging points to obtain first quantity of filtered data corresponding to the imaging points;
obtaining a weight coefficient corresponding to the imaging point according to the first number of frequency domain data and the first number of filtered data;
Obtaining optimized data corresponding to the imaging points according to the synthesized data corresponding to the imaging points and the weight coefficients corresponding to the imaging points;
and obtaining target sound ray data corresponding to the target synthetic line according to the optimized data corresponding to each imaging point.
In one embodiment, filtering the first number of frequency domain data corresponding to the imaging point to obtain a first number of filtered data corresponding to the imaging point includes:
obtaining a cut-off frequency corresponding to the imaging point according to the depth value corresponding to the imaging point;
and carrying out band-pass filtering processing on the first quantity of frequency domain data corresponding to the imaging points according to the cut-off frequency to obtain first quantity of filtered data.
In one embodiment, the method further comprises:
and if the first numbers corresponding to the plurality of imaging points are all 1, determining the first sound ray data as the target sound ray data corresponding to the target synthetic line.
In a second aspect, the present application also provides an ultrasound imaging apparatus comprising:
the scanning data acquisition module is used for acquiring a plurality of scanning sound ray data, wherein the plurality of scanning sound ray data are acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object;
the synthetic data determining module is used for obtaining target sound ray data corresponding to each synthetic line in the plurality of synthetic lines according to the plurality of scanned sound ray data, and the angle difference value between any two adjacent synthetic lines in the plurality of synthetic lines is the same;
And the ultrasonic image determining module is used for carrying out ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method according to the first aspect described above when the computer program is executed by the processor.
In a fourth aspect, the present application also provides an ultrasound system comprising a computer device as described in the third aspect above and a single-element ultrasound probe connected to the computer device.
In a fifth aspect, the present application also provides a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect described above.
In a sixth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to the first aspect described above.
According to the ultrasonic imaging method, apparatus, device, system, medium and product, a plurality of scanning sound ray data acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object are acquired; target sound ray data corresponding to each synthetic line in a plurality of synthetic lines are obtained according to the plurality of scanning sound ray data, wherein the angle difference between any two adjacent synthetic lines is the same; and ultrasonic image generation processing is performed on one circle of uniformly arranged target sound ray data corresponding to the synthetic lines, so as to obtain a target ultrasonic image corresponding to one circle inside the target object. Thus, the image shake that arises in the related art when an ultrasonic image is generated directly from the scanning sound ray data, caused by non-uniform rotation of the driving motor, is avoided. In the ultrasonic imaging technology provided by this embodiment, synthetic lines with uniformly spaced angles are preset, the target sound ray data corresponding to each synthetic line are determined according to the plurality of scanning sound ray data, and the target ultrasonic image generated from the plurality of target sound ray data, whose angles change linearly, avoids image shake.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the drawings without inventive effort for a person having ordinary skill in the art.
FIG. 1 is a diagram of an application environment for an ultrasound imaging method in one embodiment;
FIG. 2 is a flow diagram of a method of ultrasound imaging in one embodiment;
FIG. 3 is a flowchart illustrating a step of obtaining target sound ray data in one embodiment;
FIG. 4 is a flow chart of the step of obtaining delay data in one embodiment;
FIG. 5 is an exemplary schematic diagram of the routing of scan lines and composite lines in one embodiment;
FIG. 6 is a flowchart illustrating a step of obtaining target sound ray data according to another embodiment;
FIG. 7 is a flow chart of an ultrasound imaging method in another embodiment;
FIG. 8 is a block diagram of an ultrasound imaging device in one embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The ultrasonic imaging method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1. The terminal 102 is connected with the single-element ultrasonic probe 104 to acquire a plurality of scanning sound ray data acquired by the single-element ultrasonic probe 104 rotating and scanning one circle on the inner side of a target object, obtains target sound ray data corresponding to each synthetic line in a plurality of synthetic lines according to the plurality of scanning sound ray data, wherein the angle difference between any two adjacent synthetic lines in the plurality of synthetic lines is the same, and performs ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object. The single-element ultrasonic probe 104 has a 360-degree circular scanning function and is used for transmitting ultrasonic signals to the inner side of the target object while rotating one circle, and collecting the ultrasonic signals reflected from the inner side of the target object to obtain the plurality of scanning sound ray data.
In an exemplary embodiment, as shown in fig. 2, an ultrasound imaging method is provided, and the method is applied to the terminal 102 in fig. 1 for illustration, and includes the following steps 202 to 206. Wherein:
step 202, acquiring a plurality of scan sound line data.
The plurality of scanning sound ray data are acquired by the single-element ultrasonic probe rotating and scanning one circle on the inner side of the target object. The target object may be a blood vessel, the anorectum or the esophagus, or another narrow tubular object whose inner side requires ultrasound image detection. The plurality of scanning sound ray data constitute one frame of ultrasonic signal data for one circle inside the target object.
Each scanning sound ray data comprises a plurality of sampling point data and a scan angle α1 corresponding to the scan line. The depth value corresponding to the n-th sampling point data in each scanning sound ray data is Depth(n) = n / Fs / 2 × c (Formula 1), where Fs is the scan sampling rate of the single-element ultrasonic probe and c is the ultrasonic propagation speed, generally taken as 1540 m/s.
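For illustration only, Formula 1 can be written as the following Python sketch; the sampling rate value used here is an assumption, not part of the method.

```python
fs = 50e6   # assumed scan sampling rate Fs in Hz (illustrative value only)
c = 1540.0  # ultrasonic propagation speed in m/s

def depth_of_sample(n: int) -> float:
    """One-way depth (in metres) of the n-th sampling point, Depth(n) = n / Fs / 2 * c."""
    return n / fs / 2.0 * c

print(depth_of_sample(1000))  # -> 0.0154 m, i.e. 15.4 mm
```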
The scanning sound ray data are the actual sound rays obtained when a motor drives the single-element ultrasonic probe to rotate one circle; since uneven rotation of the motor is unavoidable, the scan angles α1 do not change completely linearly, i.e. the angle difference between any two adjacent scan lines is not constant, and this inconsistency cannot be avoided.
And 204, obtaining target sound ray data corresponding to each synthetic line in the plurality of synthetic lines according to the plurality of scanned sound ray data, wherein the angle difference value between any two adjacent synthetic lines in the plurality of synthetic lines is the same.
The synthetic lines are virtual sound rays preset in this embodiment. Their synthesis angles α2 are laid out with a uniform angle step of 2π / L2, where L2 is the number of synthetic lines; the synthesis angle α2(1) corresponding to the first synthetic line is 0, and the synthesis angle α2(L2) corresponding to the last synthetic line is 2π - 2π / L2. The angle difference between any two adjacent synthetic lines is Δθ = 2π / L2.
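For illustration only, a Python sketch of the uniformly spaced synthesis angles described above follows; the number of synthetic lines L2 used here is an assumed value.

```python
import numpy as np

L2 = 512                                   # assumed number of synthetic lines
alpha2 = 2.0 * np.pi * np.arange(L2) / L2  # alpha2[0] = 0, alpha2[-1] = 2*pi - 2*pi/L2

# the angle difference between any two adjacent synthetic lines is constant
assert np.allclose(np.diff(alpha2), 2.0 * np.pi / L2)
```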
The target sound ray data corresponding to each synthetic line can be obtained by synthesizing the scanning sound ray data corresponding to the scan lines within a preset distance range of that synthetic line. The target sound ray data corresponding to all the synthetic lines constitute one frame of synthesized data for one circle inside the target object.
The target sound ray data corresponding to each synthetic line comprises a plurality of imaging point data and the synthesis angle α2 corresponding to that synthetic line. The depth value corresponding to each imaging point is also given by Formula 1, that is, the depth value of an imaging point on a synthetic line is consistent with the depth value of the corresponding sampling point on a scan line.
And 206, performing ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
The ultrasonic image generation processing of the plurality of target sound ray data may include envelope detection, log (logarithmic) compression, grey-scale and colour-map mapping of the frame of ultrasonic signal data composed of the plurality of target sound ray data, so as to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
In one possible implementation manner, in the process of performing ultrasonic image generation processing on a plurality of target sound ray data, band-pass filtering processing is performed on a frame of ultrasonic signal data formed by the plurality of target sound ray data to remove out-of-band signals; and then converting the ultrasonic signals after filtering into ultrasonic images to obtain a corresponding target ultrasonic image of one circle of the inner side of the target object. For example, the cut-off frequency of the filtering process in the present embodiment may be set to a demodulation frequency corresponding to the scan line data. The band-pass filtering is performed before the ultrasonic signal is converted into the ultrasonic image, so that an effective bandwidth signal can be reserved, clutter and noise signals are suppressed, and the obtained target ultrasonic image can be clearer.
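For illustration only, the band-pass filtering, envelope detection, log compression and grey-level mapping described above can be sketched in Python as follows; the filter order, cut-off frequencies and dynamic range are assumptions and are not prescribed by this embodiment.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def generate_image(frame, fs, f_lo, f_hi, dyn_range_db=60.0):
    """Sketch of the image generation described above: band-pass filtering, envelope
    detection, log compression and grey-level mapping of one frame of target sound ray
    data (shape: lines x samples). Filter order, cut-offs and dynamic range are assumed."""
    b, a = butter(4, [f_lo, f_hi], btype="band", fs=fs)
    filtered = filtfilt(b, a, frame, axis=1)             # remove out-of-band signal
    env = np.abs(hilbert(filtered, axis=1))              # envelope detection
    log_img = 20.0 * np.log10(env / env.max() + 1e-12)   # log compression
    grey = np.clip((log_img + dyn_range_db) / dyn_range_db, 0.0, 1.0) * 255.0
    return grey.astype(np.uint8)                         # grey-scale map; scan conversion follows
```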
According to the ultrasonic imaging method provided by this embodiment, a plurality of scanning sound ray data acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object are obtained; target sound ray data corresponding to each synthetic line in a plurality of synthetic lines are obtained according to the plurality of scanning sound ray data, wherein the angle difference between any two adjacent synthetic lines is the same; and ultrasonic image generation processing is performed on one circle of uniformly arranged target sound ray data corresponding to the synthetic lines, so as to obtain a target ultrasonic image corresponding to one circle inside the target object. Thus, the image shake that arises in the related art when an ultrasonic image is generated directly from the scanning sound ray data, caused by non-uniform rotation of the driving motor, is avoided. In the ultrasonic imaging technology provided by this embodiment, synthetic lines with uniformly spaced angles are preset, the target sound ray data corresponding to each synthetic line are determined according to the plurality of scanning sound ray data, and the target ultrasonic image generated from the plurality of target sound ray data, whose angles change linearly, avoids image shake.
Further, in this embodiment, the line angles of the plurality of synthetic lines change uniformly, so that the angles in the target ultrasonic image based on the plurality of target sound ray data also change uniformly, and adaptive position matching between target ultrasonic images of different frames is achieved at the same time.
In an exemplary embodiment, referring to fig. 3 based on the embodiment shown in fig. 2, the present embodiment provides a process of obtaining target sound ray data corresponding to each of a plurality of synthetic lines according to a plurality of scanned sound ray data. As shown in fig. 3, the process includes steps 302 to 310, wherein:
step 302, obtaining aperture angle thresholds corresponding to a plurality of imaging points on a target synthetic line.
Wherein the target composite line is any one of a plurality of composite lines.
In one possible implementation, corresponding reasonable aperture angle thresholds, denoted Δα, are set for imaging points of different depths. In this embodiment, the process of obtaining the aperture angle thresholds corresponding to the plurality of imaging points on the target synthetic line includes: for each imaging point, acquiring the depth value corresponding to the imaging point, and determining the aperture angle threshold corresponding to the imaging point according to the depth value. Illustratively, a query is performed on a preset angle mapping relation according to the depth value to obtain the aperture angle threshold corresponding to the imaging point, where the angle mapping relation includes: the aperture angle threshold corresponding to an imaging point with a larger depth value is greater than or equal to that corresponding to an imaging point with a smaller depth value. In this example, an example angle mapping relation between depth value and aperture angle threshold is given: for depth values within 1 mm, Δα = 2°; for depth values from 1 mm to 3 mm, Δα = 6°; for depth values from 3 mm to 6 mm, Δα = 10°; for depth values above 6 mm, Δα = 12°.
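For illustration only, the depth-to-aperture-angle-threshold mapping in the example above can be sketched as follows; the break points and angles are those of the illustrative table and may be chosen differently.

```python
import math

def aperture_angle_threshold(depth_m: float) -> float:
    """Depth-dependent aperture angle threshold (in radians), following the example
    mapping given above; the break points and angles are illustrative, not fixed."""
    depth_mm = depth_m * 1e3
    if depth_mm <= 1.0:
        deg = 2.0
    elif depth_mm <= 3.0:
        deg = 6.0
    elif depth_mm <= 6.0:
        deg = 10.0
    else:
        deg = 12.0
    return math.radians(deg)
```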
In another possible implementation manner, an aperture angle threshold value corresponding to the sequence identifier may be preset according to the sequence identifier corresponding to each imaging point.
Step 304, for each imaging point, determining a first number of first sound ray data corresponding to the imaging point according to the aperture angle threshold.
The first sound ray data are the scanning sound ray data, among the plurality of scanning sound ray data, whose scan angle differs from the synthesis angle of the target synthetic line by less than the aperture angle threshold. For convenience of description, the number of first sound ray data, i.e. the first number, is denoted as S in the embodiments of the present application, where S is 1 or more.
Illustratively, S(n) = length({ j : |α2 - α1(j)| < Δα(n) }), where S(n) is the first number corresponding to the n-th imaging point on the target synthetic line, α1(j) is the scan angle corresponding to the j-th scan line, j ranges from 1 to L1 (the number of scan lines), α2 is the synthesis angle corresponding to the target synthetic line, and Δα(n) is the aperture angle threshold corresponding to the n-th imaging point on the target synthetic line.
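For illustration only, the selection of the first sound ray data can be sketched as follows; wrapping the angular difference at 2π is an implementation choice assumed here.

```python
import numpy as np

def first_sound_ray_indices(alpha1, alpha2_j, d_alpha):
    """Indices j of the scan lines whose scan angle alpha1[j] differs from the synthesis
    angle alpha2_j by less than the aperture angle threshold d_alpha. The first number
    S(n) is the length of the returned array."""
    diff = np.abs((np.asarray(alpha1) - alpha2_j + np.pi) % (2.0 * np.pi) - np.pi)
    return np.nonzero(diff < d_alpha)[0]
```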
In step 306, if the first number of imaging points is not all 1, delay data between the imaging points and the first number of first sound ray data is obtained for each imaging point on the target synthetic line.
When the first numbers corresponding to the imaging points are not all 1, a plurality of overlapping scan lines can participate in the synthesis of the target sound ray data.
In one possible embodiment, the delay data between the imaging point and each first sound ray data may be determined from a distance between the imaging point and a scanning focal point of each first sound ray data. The delay data comprises a delay value and a delay direction. Referring to fig. 4, in this possible embodiment, the process of acquiring delay data between the imaging points for the first number of first sound ray data respectively includes steps 402 to 408, where:
step 402, for each first sound ray data, acquiring a scanning distance between a scanning focus and an imaging point corresponding to the first sound ray data, and acquiring a synthetic distance between a synthetic focus and the imaging point of a target synthetic line.
For example, please refer to fig. 5: scan line i corresponds to the first sound ray data, synthetic line j corresponds to the target synthetic line, point P is the imaging point, point o is the starting point of scan line i, point s is the starting point of the target synthetic line, point f1 is the virtual source of scan line i, i.e. the scanning focus in step 402, and point f2 is the virtual source of synthetic line j, i.e. the synthetic focus. The line lengths of1 and sf2 are both equal to the natural focusing depth of the single-element ultrasonic probe, which is determined by the sound field of the probe used and can be predetermined. Illustratively, the line length pf1 is the scanning distance, i.e. the distance between the imaging point P and the virtual focus f1; the line length pf2 is the synthetic distance, which can be calculated from the line length pf1, the scan angle of scan line i and the synthesis angle of synthetic line j. The black square on the circle surface in fig. 5 represents the position of the single-element ultrasonic probe at a certain moment during its rotation.
And step 404, obtaining a delay value between the imaging point and the first sound ray data according to the scanning distance and the synthesized distance.
Illustratively, the delay value between the imaging point and the first sound ray data is |pf1 - pf2| / c, where pf1 is the scanning distance, pf2 is the synthetic distance, and c is the ultrasonic propagation speed.
And step 406, obtaining a delay direction between the imaging point and the first sound ray data according to the depth value of the imaging point and the depth value of the synthesized focus.
The delay direction is the sign of the delay data. If the depth value of the imaging point is smaller than the depth value of the synthetic focus, i.e. the imaging point is in front of the synthetic focus (closer to the ultrasonic probe), the delay direction is negative; if the depth value of the imaging point is larger than the depth value of the synthetic focus, i.e. the imaging point is behind the synthetic focus, the delay direction is positive.
Step 408, obtaining delay data between the imaging point and the first sound ray data according to the delay value and the delay direction. That is, the magnitude of the delay value depends on the distance from the imaging point to the scanning focus and the synthetic focus, and the delay direction depends on whether the depth value of the imaging point is greater than the depth value corresponding to the synthetic focus.
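For illustration only, steps 402 to 408 can be sketched as follows; the delay magnitude |pf1 - pf2| / c follows the expression given above, and treating the result as a signed time in seconds is an assumption.

```python
def delay_data(pf1, pf2, depth_p, depth_f2, c=1540.0):
    """Signed delay (in seconds) between an imaging point and one first sound ray data
    item: the magnitude comes from the scanning distance pf1 and the synthetic distance
    pf2, and the sign follows the depth of the point relative to the synthetic focus."""
    tau = abs(pf1 - pf2) / c
    return -tau if depth_p < depth_f2 else tau
```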
Step 308, determining composite data corresponding to the imaging point according to the first number of first sound ray data and the corresponding delay data.
For example, referring to fig. 5, suppose the imaging point P on synthetic line j corresponds to a plurality of first sound ray data, one of which is scan line i, and the delay value between the imaging point P and scan line i is τ. If the delay direction is positive, the data on scan line i at the depth increased by the offset corresponding to τ is acquired; similarly, according to the corresponding delay data, the data at the corresponding depths in the other first sound ray data are acquired as candidate data for synthesizing the synthetic data of the imaging point P. The in-phase data of the candidate data are then added to obtain the synthetic data of the imaging point P.
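For illustration only, the candidate-data selection and in-phase summation of step 308 can be sketched as follows; converting the signed delay to a sample offset with the scan sampling rate fs is an assumption.

```python
def synthesize_point(first_ray_data, sample_idx, delays_s, fs):
    """Sketch of step 308: for each first sound ray data item, pick the sample at the
    depth shifted by its signed delay (converted here to a sample offset with the scan
    sampling rate fs), then add the picked samples in phase."""
    acc = 0.0
    for rf, tau in zip(first_ray_data, delays_s):
        k = sample_idx + int(round(tau * fs))  # delayed sample index on this scan line
        if 0 <= k < len(rf):
            acc += rf[k]
    return acc
```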
And step 310, obtaining target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point.
Obtaining the target sound ray data from the synthetic data of each imaging point achieves full focusing at all depths, so that the lateral resolution of the target ultrasonic image obtained from the target sound ray data does not change with depth; the whole-field lateral resolution of the target ultrasonic image is improved, and the contrast and signal-to-noise ratio of the target ultrasonic image obtained from the target sound ray data can also be improved.
In one possible implementation, as shown in fig. 3, the process of obtaining the target sound ray data corresponding to each of the plurality of synthetic lines according to the plurality of scanning sound ray data further includes step 312, wherein:
In step 312, if the first numbers corresponding to the imaging points are all 1, the first sound ray data is determined as the target sound ray data corresponding to the target synthetic line.
When the first numbers corresponding to the imaging points are all 1, the first sound ray data corresponding to each imaging point on the target synthetic line are the same, namely the scanning sound ray data whose scan angle has the smallest difference from the synthesis angle of the target synthetic line; in this embodiment, that scanning sound ray data is directly used as the target sound ray data corresponding to the target synthetic line, which balances efficiency and accuracy in obtaining the target sound ray data.
In another possible embodiment, when the first numbers corresponding to the plurality of imaging points are all 1, the delay between each imaging point and the first sound ray data may still be calculated point by point.
In an exemplary embodiment, please refer to fig. 6. Based on the embodiment shown in fig. 3, this embodiment provides a process for obtaining the target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point. As shown in fig. 6, the process includes steps 602 to 610, wherein:
step 602, for each imaging point, performing frequency domain conversion processing on a first number of first sound ray data corresponding to the imaging point, to obtain a first number of frequency domain data corresponding to the imaging point.
Step 604, performing filtering processing on the first number of frequency domain data corresponding to the imaging point to obtain a first number of filtered data corresponding to the imaging point.
In one possible implementation manner, the filtering processing is performed on the first number of frequency domain data corresponding to the imaging point, and the process of obtaining the first number of filtered data corresponding to the imaging point includes: obtaining a cut-off frequency corresponding to the imaging point according to the depth value corresponding to the imaging point; and carrying out band-pass filtering processing on the first quantity of frequency domain data corresponding to the imaging points according to the cut-off frequency to obtain the first quantity of filtered data.
Illustratively, query processing is performed from a preset frequency domain mapping relation according to the depth value of the imaging point, so as to obtain a cut-off frequency corresponding to the imaging point; wherein, the frequency domain mapping relation comprises: the cut-off frequency corresponding to the imaging point with the larger depth value is smaller than or equal to the cut-off frequency corresponding to the imaging point with the smaller depth value.
Step 606, obtaining a weight coefficient corresponding to the imaging point according to the first number of frequency domain data and the first number of filtered data.
Illustratively, the weight coefficient corresponding to the imaging point is computed from the first number of frequency domain data and the first number of filtered data, where S denotes the first number, and for each k from 1 to S the k-th frequency domain data and the k-th filtered data enter the computation; the coefficient characterizes the frequency-domain coherence of the first sound ray data at the imaging point.
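For illustration only, a coherence-factor-style weight is sketched below; the energy-ratio form is an assumption used for illustration and is not asserted to be the exact expression of this embodiment.

```python
import numpy as np

def frequency_domain_weight(freq_data, filt_data, eps=1e-12):
    """Sketch of a frequency-domain coherence weight for one imaging point. freq_data
    and filt_data hold the S frequency domain data and the S filtered data (shape (S, M)).
    The energy ratio below is one plausible coherence-factor form, assumed for illustration."""
    num = np.sum(np.abs(np.asarray(filt_data)) ** 2)
    den = np.sum(np.abs(np.asarray(freq_data)) ** 2) + eps
    return float(num / den)
```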
Step 608, obtaining optimized data corresponding to the imaging points according to the synthesized data corresponding to the imaging points and the weight coefficients corresponding to the imaging points.
And 610, obtaining target sound ray data corresponding to the target synthetic line according to the optimized data corresponding to each imaging point.
In the method provided by this embodiment, by calculating the frequency-domain coherence at each imaging point, the target sound ray data adapt to the characteristics of the signal itself and to the distance, so that the signal-to-noise ratio, contrast and resolution of the obtained target ultrasonic image are further improved. Meanwhile, because the weight coefficients are computed coherently in the frequency domain, the weight coefficients of adjacent imaging points are not prone to abrupt changes, and spots are not easily produced in the obtained target ultrasonic image.
In an exemplary embodiment, please refer to fig. 7. An ultrasound imaging method is provided, illustrated as applied to the terminal 102 in fig. 1, and includes the following steps 701 to 709, wherein:
step 701, acquiring a plurality of scanning sound line data; the plurality of scanning sound line data are acquired by rotating a scanning sound line on the inner side of the target object through the single-element ultrasonic probe.
Step 702, obtaining aperture angle thresholds corresponding to a plurality of imaging points on a target synthetic line respectively; wherein the target composite line is any one of a plurality of composite lines.
Optionally, the process of obtaining aperture angle thresholds corresponding to the plurality of imaging points on the target synthetic line includes: for each imaging point, acquiring a depth value corresponding to the imaging point; and determining an aperture angle threshold corresponding to the imaging point according to the depth value.
Step 703, determining, for each imaging point, a first number of first sound ray data corresponding to the imaging point according to the aperture angle threshold, where the first sound ray data are the scanning sound ray data, among the plurality of scanning sound ray data, whose scan angle differs from the synthesis angle of the target synthetic line by less than the aperture angle threshold.
In step 704, it is detected whether the first numbers corresponding to the plurality of imaging points on the target synthetic line are all 1.
Step 705, if the first numbers corresponding to the imaging points are not all 1, acquiring, for each imaging point on the target synthetic line, delay data between the imaging point and the first number of first sound ray data.
Optionally, the process of acquiring delay data between the imaging point and the first number of first sound ray data respectively includes: for each first sound ray data, acquiring a scanning distance between the scanning focus corresponding to the first sound ray data and the imaging point, and acquiring a synthetic distance between the synthetic focus of the target synthetic line and the imaging point; obtaining a delay value between the imaging point and the first sound ray data according to the scanning distance and the synthetic distance; obtaining a delay direction between the imaging point and the first sound ray data according to the depth value of the imaging point and the depth value of the synthetic focus; and obtaining the delay data between the imaging point and the first sound ray data according to the delay value and the delay direction.
Step 706, determining composite data of the imaging point according to the first number of first sound ray data and the corresponding delay data.
Step 707, obtaining target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point.
Optionally, the process of obtaining the target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point includes: for each imaging point, performing frequency domain conversion processing on a first number of first sound ray data corresponding to the imaging point to obtain a first number of frequency domain data corresponding to the imaging point; filtering the first quantity of frequency domain data corresponding to the imaging points to obtain first quantity of filtered data corresponding to the imaging points; obtaining a weight coefficient corresponding to the imaging point according to the first number of frequency domain data and the first number of filtered data; obtaining optimized data corresponding to the imaging points according to the synthesized data corresponding to the imaging points and the weight coefficients corresponding to the imaging points; and obtaining target sound ray data corresponding to the target synthetic line according to the optimized data corresponding to each imaging point.
Optionally, the filtering processing is performed on the first number of frequency domain data corresponding to the imaging point, and the process of obtaining the first number of filtered data corresponding to the imaging point includes: obtaining a cut-off frequency corresponding to the imaging point according to the depth value corresponding to the imaging point; and carrying out band-pass filtering processing on the first quantity of frequency domain data corresponding to the imaging points according to the cut-off frequency to obtain first quantity of filtered data.
In step 708, if the first numbers corresponding to the imaging points are all 1, the first sound ray data is determined as the target sound ray data corresponding to the target synthetic line.
Step 709, performing ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide an ultrasound imaging apparatus for implementing the above-mentioned ultrasound imaging method. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitations in one or more embodiments of the ultrasound imaging device provided below may be referred to above for limitations of the ultrasound imaging method, and will not be repeated here.
In one exemplary embodiment, as shown in fig. 8, there is provided an ultrasonic imaging apparatus comprising: a scan data acquisition module 802, a composite data determination module 804, and an ultrasound image determination module 806, wherein:
the scan data acquisition module 802 is configured to acquire a plurality of scan line data, where the plurality of scan line data are acquired by rotating a single-primitive ultrasonic probe inside a target object for scanning a circle.
The composite data determining module 804 is configured to obtain target sound ray data corresponding to each of the plurality of composite lines according to the plurality of scanned sound ray data, where an angle difference between any two adjacent composite lines in the plurality of composite lines is the same.
The ultrasonic image determining module 806 is configured to perform ultrasonic image generation processing on the plurality of target sound ray data, so as to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
In one exemplary embodiment, the synthetic data determination module 804 includes:
an angle threshold value obtaining unit, configured to obtain aperture angle threshold values corresponding to a plurality of imaging points on a target synthetic line, where the target synthetic line is any one synthetic line of the plurality of synthetic lines;
the first sound ray data determining unit is used for determining, for each imaging point, a first number of first sound ray data corresponding to the imaging point according to the aperture angle threshold, wherein the first sound ray data are the scanning sound ray data, among the plurality of scanning sound ray data, whose scan angle differs from the synthesis angle of the target synthetic line by less than the aperture angle threshold;
the delay data calculation unit is used for acquiring delay data between the imaging points and the first number of first sound ray data respectively aiming at each imaging point on the target synthetic line if the first number corresponding to the imaging points is not all 1;
the synthetic data acquisition unit is used for determining synthetic data of imaging points according to the first number of first sound ray data and corresponding delay data;
and the target sound ray data determining unit is used for obtaining target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point.
In an exemplary embodiment, the angle threshold acquiring unit is configured to acquire, for each imaging point, a depth value corresponding to the imaging point; and determining an aperture angle threshold corresponding to the imaging point according to the depth value.
In an exemplary embodiment, the delay data calculating unit is configured to obtain, for each first sound ray data, a scanning distance between a scanning focal point and an imaging point corresponding to the first sound ray data, and a synthetic distance between a synthetic focal point and the imaging point of the target synthetic line; obtaining a delay value between an imaging point and first sound ray data according to the scanning distance and the synthesized distance; obtaining a delay direction between the imaging point and the first sound ray data according to the depth value of the imaging point and the depth value of the synthesized focus; and obtaining delay data between the imaging point and the first sound ray data according to the delay value and the delay direction.
In an exemplary embodiment, the target sound ray data determining unit is configured to perform, for each imaging point, frequency domain conversion processing on a first number of first sound ray data corresponding to the imaging point, to obtain a first number of frequency domain data corresponding to the imaging point; filtering the first quantity of frequency domain data corresponding to the imaging points to obtain first quantity of filtered data corresponding to the imaging points; obtaining a weight coefficient corresponding to the imaging point according to the first number of frequency domain data and the first number of filtered data; obtaining optimized data corresponding to the imaging points according to the synthesized data corresponding to the imaging points and the weight coefficients corresponding to the imaging points; and obtaining target sound ray data corresponding to the target synthetic line according to the optimized data corresponding to each imaging point.
In an exemplary embodiment, the target sound ray data determining unit is configured to obtain a cut-off frequency corresponding to an imaging point according to a depth value corresponding to the imaging point; and carrying out band-pass filtering processing on the first quantity of frequency domain data corresponding to the imaging points according to the cut-off frequency to obtain first quantity of filtered data.
In an exemplary embodiment, the target sound ray data determining unit is configured to determine the first sound ray data as target sound ray data corresponding to the target synthetic line if the first number of the imaging points is all 1.
The various modules in the ultrasound imaging apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device, which may be a terminal, is provided, and an internal structure thereof may be as shown in fig. 9. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an ultrasound imaging method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one exemplary embodiment, an ultrasound system is provided that includes the computer device provided by each of the embodiments above, and a single-element ultrasound probe coupled to the computer device.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use, and processing of the related data are required to meet the related regulations.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. A method of ultrasound imaging, the method comprising:
acquiring a plurality of scanning sound ray data, wherein the plurality of scanning sound ray data are acquired by a single-element ultrasonic probe rotating and scanning one circle on the inner side of a target object;
obtaining target sound ray data corresponding to each synthetic line in a plurality of synthetic lines according to the plurality of scanned sound ray data, wherein the angle difference value between any two adjacent synthetic lines in the plurality of synthetic lines is the same;
And performing ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one circle of the inner side of the target object.
2. The method of claim 1, wherein obtaining target sound ray data corresponding to each of a plurality of composite lines from the plurality of scanned sound ray data comprises:
acquiring aperture angle thresholds corresponding to a plurality of imaging points on a target synthetic line, wherein the target synthetic line is any one synthetic line of the plurality of synthetic lines;
for each imaging point, determining a first number of first sound ray data corresponding to the imaging point according to the aperture angle threshold, wherein the first sound ray data are the scanning sound ray data, among the plurality of scanning sound ray data, whose scan angle differs from the synthesis angle of the target synthetic line by less than the aperture angle threshold;
if the first number corresponding to the imaging points is not all 1, acquiring delay data between the imaging points and the first number of first sound ray data respectively for each imaging point on the target synthetic line;
determining synthetic data of the imaging point according to the first number of first sound ray data and corresponding delay data;
And obtaining target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point.
3. The method according to claim 2, wherein the obtaining the aperture angle thresholds corresponding to each of the plurality of imaging points on the target synthetic line comprises:
for each imaging point, acquiring a depth value corresponding to the imaging point;
and determining the aperture angle threshold corresponding to the imaging point according to the depth value.
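Claim 3 leaves the depth-to-threshold mapping open. One common assumption, used only for illustration here, is to keep the effective lateral aperture width roughly constant, so the aperture angle threshold shrinks as depth increases; the width, floor, and ceiling values below are placeholders.

```python
import numpy as np

def aperture_angle_threshold_deg(depth_mm, aperture_width_mm=2.0,
                                 min_deg=0.5, max_deg=15.0):
    """Depth-dependent aperture angle threshold (illustrative assumption).

    Holding the effective lateral aperture width roughly constant means the
    usable angular span shrinks with depth: theta ~ width / depth (radians).
    """
    depth_mm = np.maximum(np.asarray(depth_mm, dtype=float), 1e-3)
    theta = np.degrees(aperture_width_mm / depth_mm)
    return np.clip(theta, min_deg, max_deg)

print(aperture_angle_threshold_deg([5.0, 20.0, 40.0]))  # larger threshold near the probe
```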
4. The method of claim 2, wherein the acquiring of the delay data between each imaging point and the corresponding first number of first sound ray data comprises:
for each piece of first sound ray data, acquiring a scanning distance between the scanning focus corresponding to the first sound ray data and the imaging point, and acquiring a synthetic distance between the synthetic focus of the target synthetic line and the imaging point;
obtaining a delay value between the imaging point and the first sound ray data according to the scanning distance and the synthetic distance;
obtaining a delay direction between the imaging point and the first sound ray data according to the depth value of the imaging point and the depth value of the synthetic focus;
and obtaining the delay data between the imaging point and the first sound ray data according to the delay value and the delay direction.
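The delay computation of claim 4 can be pictured with a virtual-source style model: the delay magnitude follows from the difference between the scanning distance and the synthetic distance, and its sign from whether the imaging point lies deeper or shallower than the synthetic focus. The speed of sound, the coordinate convention, and the specific sign rule below are assumptions for illustration, not the patented formula.

```python
import numpy as np

SOUND_SPEED_MM_PER_US = 1.54  # nominal speed of sound in soft tissue

def delay_data_us(point_xy, scan_focus_xy, synth_focus_xy):
    """Delay value plus direction for one imaging point and one first sound ray.

    Illustrative virtual-source model: the magnitude comes from the difference
    between the scanning distance and the synthetic distance, and the sign from
    comparing the depth of the imaging point with that of the synthetic focus.
    """
    point = np.asarray(point_xy, dtype=float)
    scan_dist = np.linalg.norm(point - np.asarray(scan_focus_xy, dtype=float))
    synth_dist = np.linalg.norm(point - np.asarray(synth_focus_xy, dtype=float))
    delay_value = abs(scan_dist - synth_dist) / SOUND_SPEED_MM_PER_US
    # Delay direction: imaging point deeper than the synthetic focus -> positive.
    direction = 1.0 if point[1] >= synth_focus_xy[1] else -1.0
    return direction * delay_value

# Example: an imaging point 3 mm beyond the synthetic focus along the line.
print(delay_data_us(point_xy=(0.5, 13.0), scan_focus_xy=(0.8, 10.0),
                    synth_focus_xy=(0.0, 10.0)))
```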
5. The method according to claim 2, wherein the obtaining of the target sound ray data corresponding to the target synthetic line according to the synthetic data of each imaging point comprises:
performing frequency domain conversion processing on the first number of first sound ray data corresponding to each imaging point to obtain a first number of frequency domain data corresponding to the imaging point;
filtering the first number of frequency domain data corresponding to the imaging point to obtain a first number of filtered data corresponding to the imaging point;
obtaining a weight coefficient corresponding to the imaging point according to the first number of frequency domain data and the first number of filtered data;
obtaining optimized data corresponding to the imaging point according to the synthetic data corresponding to the imaging point and the weight coefficient corresponding to the imaging point;
and obtaining the target sound ray data corresponding to the target synthetic line according to the optimized data corresponding to each imaging point.
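One plausible reading of the weight coefficient in claim 5, shown here only as an assumption, is a coherence-style factor computed in the frequency domain: the fraction of spectral energy of the first sound ray data that survives a band-pass mask. The optimized data would then be the synthetic data of the imaging point scaled by this weight; the window size, sampling rate, and band limits below are placeholders.

```python
import numpy as np

def frequency_domain_weight(first_rays, fs_mhz, band_mhz):
    """Weight coefficient for one imaging point (illustrative sketch only).

    first_rays : (n_rays, n_samples) window of first sound ray data around the point
    The weight is in-band spectral energy divided by total spectral energy,
    i.e. one plausible combination of the frequency domain data and the
    filtered data, not the exact formula of the claims.
    """
    spectra = np.fft.rfft(first_rays, axis=1)                  # frequency domain data
    freqs = np.fft.rfftfreq(first_rays.shape[1], d=1.0 / fs_mhz)
    mask = (freqs >= band_mhz[0]) & (freqs <= band_mhz[1])
    filtered = spectra * mask                                   # filtered data
    total = np.sum(np.abs(spectra) ** 2) + 1e-12
    in_band = np.sum(np.abs(filtered) ** 2)
    return in_band / total                                      # weight in [0, 1]

rng = np.random.default_rng(1)
window = rng.standard_normal((5, 64))
w = frequency_domain_weight(window, fs_mhz=40.0, band_mhz=(3.0, 12.0))
print(round(float(w), 3))  # optimized data = synthetic data * w (per claim 5)
```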
6. The method of claim 5, wherein filtering the first number of frequency domain data corresponding to the imaging point to obtain the first number of filtered data corresponding to the imaging point comprises:
obtaining a cut-off frequency corresponding to the imaging point according to the depth value corresponding to the imaging point;
and performing band-pass filtering processing on the first number of frequency domain data corresponding to the imaging point according to the cut-off frequency to obtain the first number of filtered data.
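Claim 6 ties the cut-off frequency to depth but does not fix the mapping. A simple illustrative assumption is a linear downward shift of the pass band with depth, reflecting frequency-dependent attenuation; the centre frequency, bandwidth, shift rate, and limits below are placeholders.

```python
def bandpass_cutoffs_mhz(depth_mm, f_center_mhz=9.0, bandwidth_mhz=6.0,
                         downshift_mhz_per_mm=0.05, f_floor_mhz=2.0):
    """Depth-dependent band-pass cut-off frequencies (illustrative assumption).

    Frequency-dependent attenuation shifts the usable spectrum downward with
    depth, so the pass band is centred lower for deeper imaging points.
    """
    center = max(f_center_mhz - downshift_mhz_per_mm * depth_mm, f_floor_mhz)
    low = max(center - bandwidth_mhz / 2.0, 0.5)
    high = center + bandwidth_mhz / 2.0
    return low, high

for d in (5.0, 20.0, 40.0):
    print(d, bandpass_cutoffs_mhz(d))  # lower cut-offs for deeper points
```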
7. The method according to claim 2, wherein the method further comprises:
and if the first number corresponding to each imaging point is 1, determining the corresponding first sound ray data as the target sound ray data of the target synthetic line.
8. An ultrasound imaging apparatus, the apparatus comprising:
the scanning data acquisition module is configured to acquire a plurality of scanned sound ray data, wherein the plurality of scanned sound ray data are acquired by a single-element ultrasonic probe whose scanning unit rotates on the inner side of a target object;
the synthetic data determining module is configured to obtain target sound ray data corresponding to each synthetic line in a plurality of synthetic lines according to the plurality of scanned sound ray data, wherein the angle difference between any two adjacent synthetic lines in the plurality of synthetic lines is the same;
and the ultrasonic image determining module is configured to perform ultrasonic image generation processing on the plurality of target sound ray data to obtain a target ultrasonic image corresponding to one full circle of the inner side of the target object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. An ultrasound system, comprising the computer device of claim 9 and a single-element ultrasound probe coupled to the computer device.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
12. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202311579945.9A 2023-11-24 2023-11-24 Ultrasonic imaging methods, apparatus, devices, systems, media, and products Pending CN117338338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311579945.9A CN117338338A (en) 2023-11-24 2023-11-24 Ultrasonic imaging methods, apparatus, devices, systems, media, and products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311579945.9A CN117338338A (en) 2023-11-24 2023-11-24 Ultrasonic imaging methods, apparatus, devices, systems, media, and products

Publications (1)

Publication Number Publication Date
CN117338338A (en) 2024-01-05

Family

ID=89365202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311579945.9A Pending CN117338338A (en) 2023-11-24 2023-11-24 Ultrasonic imaging methods, apparatus, devices, systems, media, and products

Country Status (1)

Country Link
CN (1) CN117338338A (en)

Similar Documents

Publication Publication Date Title
EP2085927B1 (en) Constrained iterative blind deconvolution
US8840554B2 (en) Ultrasonic 3-dimensional image reconstruction method and ultrasonic wave system thereof
CN108836392B (en) Ultrasonic imaging method, device and equipment based on ultrasonic RF signal and storage medium
JP2000279416A (en) Three-dimensional imaging method and system
CN103202714B (en) Ultrasonic Diagnostic Apparatus, Medical Image Processing Apparatus, And Medical Image Processing Method
US20130343627A1 (en) Suppression of reverberations and/or clutter in ultrasonic imaging systems
EP3817666A1 (en) Systems and methods for generating and estimating unknown and unacquired ultrasound data
KR101978728B1 (en) Method and device for classifying medical ultrasound image based on deep learning using smart device
CN106651740B (en) A kind of full focus data fast imaging method of ultrasound based on FPGA and system
JPH10171977A (en) Improved computer tomography scanner and method for executing computer tomography
JP2006508729A (en) High frame rate 3D ultrasound imager
CN111248858A (en) Photoacoustic tomography reconstruction method based on frequency domain wavenumber domain
Jirik et al. High-resolution ultrasonic imaging using two-dimensional homomorphic filtering
Singh et al. Synthetic models of ultrasound image formation for speckle noise simulation and analysis
US6306092B1 (en) Method and apparatus for calibrating rotational offsets in ultrasound transducer scans
CN117338338A (en) Ultrasonic imaging methods, apparatus, devices, systems, media, and products
US20180284249A1 (en) Ultrasound imaging system and method for representing rf signals therein
US20230305126A1 (en) Ultrasound beamforming method and device
Gyöngy et al. Experimental validation of a convolution-based ultrasound image formation model using a planar arrangement of micrometer-scale scatterers
Foroozan et al. Wave atom based Compressive Sensing and adaptive beamforming in ultrasound imaging
Ruiter et al. P3a-2 resolution assessment of a 3d ultrasound computer tomograph using ellipsoidal backprojection
Huy et al. An improved distorted born iterative method for reduced computational complexity and enhanced image reconstruction in ultrasound tomography
Jin et al. Does ultrasonic data format matter for deep neural networks?
CN116712101B (en) Ultrasound image generation method, device, computer equipment and storage medium
CN117694924A (en) Ultrasound contrast imaging method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination