CN107847214A - Three-D ultrasonic fluid imaging method and system - Google Patents
- Publication number: CN107847214A
- Application number: CN201580081287.8A
- Authority: CN (China)
- Prior art keywords: image data, ultrasonic, dimensional, volume, velocity vector
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
Abstract
A three-dimensional ultrasonic fluid imaging method and ultrasonic imaging system. The system includes: a probe (1); a transmitting circuit (2) for exciting the probe to emit a volume ultrasonic beam toward a scan target; a receiving circuit (4) and a beam synthesis module (5) for receiving the echoes of the volume ultrasonic beam to obtain a volume ultrasonic echo signal; a data processing module (9) for obtaining three-dimensional ultrasonic image data of at least part of the scan target from the volume ultrasonic echo signal and, based on that signal, obtaining fluid velocity vector information of target points in the scan target; a 3D image processing module (11) for marking the fluid velocity vector information of the target points in the three-dimensional ultrasonic image data to form fluid velocity vector identifiers, yielding volume image data containing those identifiers; a parallax image generation module (12) for converting the volume image data into two paths of parallax image data; and a display screen display device (8) for receiving and displaying the two paths of parallax image data to form a 3D ultrasound image. The system presents 3D ultrasound images to the user through 3D display technology.
Description
The invention relates to fluid information imaging and display technology in ultrasonic systems, and in particular to a three-dimensional ultrasonic fluid imaging method and an ultrasonic imaging system.
In medical ultrasound imaging devices, conventional fluid display technology is based only on two-dimensional images. In blood flow imaging, ultrasonic waves are transmitted into the object under examination, and color Doppler flow imaging, like pulsed-wave and continuous-wave Doppler, exploits the Doppler effect between red blood cells and the ultrasonic waves. A color Doppler flow instrument comprises a two-dimensional ultrasonic imaging system, a pulsed Doppler (one-dimensional Doppler) blood flow analysis system, a continuous-wave Doppler blood flow measurement system, and a color Doppler (two-dimensional Doppler) blood flow imaging system. An oscillator generates two orthogonal signals with a phase difference of π/2; these are multiplied separately by the Doppler blood flow signal, the products are converted to digital form by an analog-to-digital (A/D) converter, and the digital signal is passed through a comb (wall) filter to remove the low-frequency components produced by the vessel wall, valves, and the like before being sent to an autocorrelator for autocorrelation detection. Since each sample contains Doppler blood flow information generated by many red blood cells, autocorrelation detection yields a mixed signal of multiple blood flow velocities. The autocorrelation result is sent to a velocity calculator and a variance calculator to obtain the mean velocity; the mean velocity, the FFT-processed blood flow spectrum, and the two-dimensional image information are stored in a digital scan converter (DSC). Finally, according to the direction and speed of blood flow, a color processor pseudo-color-codes the blood flow data and sends it to a color display, completing the color Doppler blood flow display.
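The autocorrelation-based mean-velocity estimate described above is the classic lag-one ("Kasai") estimator. The following is an illustrative reconstruction of that standard technique, not code from the patent; all names and numerical values are assumptions.

```python
import numpy as np

def kasai_mean_velocity(iq, prf, f0, c=1540.0):
    """Estimate mean axial velocity from a slow-time IQ ensemble at one
    sample gate using the lag-1 autocorrelation (Kasai) estimator.

    iq  : complex ndarray, shape (n_ensemble,), quadrature-demodulated samples
    prf : pulse repetition frequency in Hz
    f0  : transmit centre frequency in Hz
    c   : assumed speed of sound in m/s
    """
    # Lag-1 autocorrelation along slow time.
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    # Mean Doppler frequency from the phase of R(1).
    mean_doppler = np.angle(r1) * prf / (2.0 * np.pi)
    # Doppler equation: v = fd * c / (2 * f0).
    return mean_doppler * c / (2.0 * f0)

# Synthetic ensemble: a scatterer moving at 0.2 m/s along the beam axis.
prf, f0, c, v_true = 5e3, 5e6, 1540.0, 0.2
fd = 2.0 * v_true * f0 / c                  # expected Doppler shift
n = np.arange(16)
iq = np.exp(2j * np.pi * fd * n / prf)      # noise-free IQ phasor
v_est = kasai_mean_velocity(iq, prf, f0, c)
```

On noise-free data the estimator recovers the true velocity exactly, provided the Doppler shift stays below the Nyquist limit of prf/2.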
Color Doppler flow display shows only the magnitude and direction of the blood flow velocity in the scan plane, yet the flow within a vessel is not only laminar; more complex flow conditions, such as vortices, often occur in stenosed arteries. Two-dimensional ultrasound scanning can reflect only the velocity component in the scan plane. Display technology based on two-dimensional ultrasound images cannot faithfully reproduce how liquid flows in blood vessels or other tubular or fluid-holding organs: it usually offers isolated sections, or a pseudo-three-dimensional image reconstructed from several sections, which cannot provide the physician with richer, more comprehensive, and more accurate diagnostic image information. There is therefore a need for a more intuitive fluid information display scheme that improves on current fluid imaging technology.
Disclosure of Invention
Accordingly, in view of the shortcomings of the prior art, there is a need to provide a three-dimensional ultrasonic fluid imaging method and an ultrasonic imaging system that offer a more intuitive fluid information display scheme and a better viewing angle for the user.
An embodiment of the present invention provides a three-dimensional ultrasound fluid imaging method, including:
emitting a volume ultrasonic beam toward a scan target;
receiving the echoes of the volume ultrasonic beam to obtain a volume ultrasonic echo signal;
acquiring three-dimensional ultrasonic image data of at least one part of the scanning target according to the volume ultrasonic echo signal;
obtaining fluid velocity vector information of a target point in the scanning target based on the volume ultrasonic echo signal;
marking fluid velocity vector information of a target point in the three-dimensional ultrasonic image data to form a fluid velocity vector identifier, and obtaining volume image data containing the fluid velocity vector identifier;
converting the volume image data into two paths of parallax image data;
and outputting and displaying the two paths of parallax image data.
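As an illustration only (the claim specifies no implementation), the marking step — attaching a velocity-vector identifier to each target point of the three-dimensional data to form volume image data — might be sketched as below. All function and field names are hypothetical, not from the patent.

```python
import numpy as np

def mark_velocity_vectors(volume, points, vectors, scale=3.0):
    """Attach fluid velocity vector identifiers to target points of a 3-D
    ultrasound volume, yielding 'volume image data': the grey-scale voxels
    plus one arrow glyph per target point.  Illustrative sketch only.

    volume  : ndarray (z, y, x), grey-scale voxel data
    points  : ndarray (n, 3), target-point voxel coordinates
    vectors : ndarray (n, 3), fluid velocity at each target point
    """
    markers = []
    for p, v in zip(points, vectors):
        speed = float(np.linalg.norm(v))
        direction = v / speed if speed > 0 else np.zeros(3)
        markers.append({
            "origin": p.astype(float),
            "tip": p + scale * direction * speed,  # arrow length ~ speed
            "speed": speed,                        # could map to colour
        })
    return {"voxels": volume, "markers": markers}

# One target point at the volume centre, flowing along +x at 0.3 m/s.
vol = np.zeros((8, 8, 8))
pts = np.array([[4, 4, 4]])
vecs = np.array([[0.0, 0.0, 0.3]])
data = mark_velocity_vectors(vol, pts, vecs)
```

A renderer would then draw each marker as an arrow whose length and colour encode the local speed, as the claims describe for the fluid velocity vector identifier.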
A method of three-dimensional ultrasonic fluid imaging, comprising:
emitting a volume ultrasonic beam toward a scan target;
receiving the echoes of the volume ultrasonic beam to obtain a volume ultrasonic echo signal;
obtaining enhanced three-dimensional ultrasonic image data of at least one part of the scanning target by a gray scale blood flow imaging technology according to the volume ultrasonic echo signal;
segmenting a region of interest representing the fluid region in the enhanced three-dimensional ultrasonic image data to obtain cloud-like cluster region blocks;
marking the cloud-like cluster region blocks as cluster bodies in the three-dimensional ultrasonic image data to obtain volume image data containing the cluster bodies;
converting the volume image data into two paths of parallax image data;
and outputting and displaying the two paths of parallax image data so that, when displayed, the cluster bodies present a rolling visual effect that changes over time.
A three-dimensional ultrasonic fluid imaging system, comprising:
a probe;
a transmitting circuit for exciting the probe to emit a volume ultrasonic beam toward a scan target;
a receiving circuit and a beam synthesis module for receiving the echoes of the volume ultrasonic beam to obtain a volume ultrasonic echo signal;
the data processing module is used for acquiring three-dimensional ultrasonic image data of at least one part of the scanning target according to the volume ultrasonic echo signal and acquiring fluid velocity vector information of a target point in the scanning target based on the volume ultrasonic echo signal;
the 3D image processing module is used for marking fluid velocity vector information of a target point in the three-dimensional ultrasonic image data to form a fluid velocity vector identifier and obtaining volume image data containing the fluid velocity vector identifier;
the parallax image generation module is used for converting the volume image data into two paths of parallax image data; and
and the display screen display device is used for receiving and displaying the two paths of parallax image data.
A three-dimensional ultrasonic fluid imaging system, comprising:
a probe;
a transmitting circuit for exciting the probe to emit a volume ultrasonic beam toward a scan target;
a receiving circuit and a beam synthesis module for receiving the echoes of the volume ultrasonic beam to obtain a volume ultrasonic echo signal;
the data processing module is used for obtaining enhanced three-dimensional ultrasonic image data of at least one part of the scanning target through a gray scale blood flow imaging technology according to the volume ultrasonic echo signal;
the 3D image processing module is used for segmenting a region of interest representing the fluid region in the enhanced three-dimensional ultrasonic image data to obtain cloud-like cluster region blocks, and for marking the cloud-like cluster region blocks in the three-dimensional ultrasonic image data to obtain volume image data containing the cluster bodies;
the parallax image generation module is used for converting the volume image data into two paths of parallax image data;
and the display screen display device is used for outputting and displaying the two paths of parallax image data so that, when displayed, the cluster bodies present a rolling visual effect that changes over time.
The invention provides an ultrasonic fluid imaging method and system based on 3D display technology, which achieve the effect of viewing a 3D ultrasound image with the human eye through a display screen, can fully show the motion of the fluid during display, and provide the observer with more viewing angles.
FIG. 1 is a block diagram schematic of an ultrasound imaging system of one embodiment of the present invention;
FIG. 2 is a schematic view of a vertically emitted planar ultrasound beam of one embodiment of the present invention;
FIG. 3 is a schematic view of deflecting a transmitted planar ultrasound beam in accordance with one embodiment of the present invention;
FIG. 4 is a schematic view of a focused ultrasound beam of one embodiment of the present invention;
FIG. 5 is a schematic view of a divergent ultrasound beam in one embodiment of the present invention;
FIG. 6(a) is a schematic diagram of the array elements of a two-dimensional area array probe, FIG. 6(b) is a schematic diagram of three-dimensional image scanning performed along a certain ultrasound propagation direction using the two-dimensional area array probe of the present invention, and FIG. 6(c) is a schematic diagram of how the relative offset of the scanning volume in FIG. 6(b) is measured;
FIG. 7(a) is a schematic diagram of element partitioning of a two-dimensional area array probe in an embodiment of the invention, and FIG. 7(b) is a schematic diagram of volume-focused ultrasonic transmission in an embodiment of the invention;
FIG. 8(a) is a schematic flow chart of a method for displaying velocity vector identifiers according to an embodiment of the present invention, and FIG. 8(b) is a schematic flow chart of a method for displaying cluster bodies according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of a method according to one embodiment of the present invention;
FIG. 10 is a schematic flow chart of a method according to one embodiment of the present invention;
FIG. 11(a) is a schematic diagram of fluid velocity vector information calculation in a first mode according to one embodiment of the present invention;
FIG. 11(b) is a schematic diagram illustrating calculation of fluid velocity vector information in a second mode according to an embodiment of the present invention;
FIG. 12(a) is a schematic illustration of two ultrasonic propagation direction transmissions in one embodiment of the present invention;
FIG. 12(b) is a schematic diagram synthesized based on the fluid velocity vector information shown in FIG. 12 (a);
FIG. 12(c) is a schematic illustration of a fluid velocity vector calculated for a spot in one embodiment of the present invention;
FIG. 12(d) is a schematic diagram of 8-point interpolation in one embodiment of the present invention;
FIG. 13(a) is a diagram illustrating a first effect of volume image data according to an embodiment of the invention;
FIG. 13(b) is a diagram illustrating a second effect of volume image data according to an embodiment of the invention;
FIG. 14 is a diagram illustrating a third effect of volume image data according to an embodiment of the invention;
FIG. 15 is a schematic structural diagram of a spatial stereoscopic display device according to an embodiment of the invention;
FIG. 16 is a schematic structural diagram of a spatial stereoscopic display device according to an embodiment of the invention;
FIG. 17 is a schematic structural diagram of a spatial stereoscopic display device according to an embodiment of the invention;
FIG. 18 is a diagram illustrating an effect of volume image data based on a first mode according to an embodiment of the present invention;
FIG. 19 is a diagram illustrating an effect of volume image data based on a second mode according to an embodiment of the present invention;
FIG. 20 is a diagram illustrating a third effect of volume image data according to an embodiment of the invention;
FIG. 21(a) is a schematic diagram of the imaging effect of a cloud-like cluster in one embodiment of the present invention, FIG. 21(b) is a schematic diagram of the imaging effect of a cloud-like cluster superimposed with blood flow velocity vector markers in one embodiment of the present invention, and FIG. 21(c) is a schematic diagram of the effect of a cloud-like cluster superimposed with color information in one embodiment of the present invention;
FIG. 22 is a diagram illustrating the effect of a target point being selected to form a trajectory according to an embodiment of the present invention;
FIG. 23 is a schematic diagram of converting volume image data into two paths of parallax images according to an embodiment of the present invention;
FIG. 24 is a schematic diagram of converting volume image data into two paths of parallax images according to another embodiment of the present invention;
FIG. 25 is a schematic structural diagram illustrating a human-computer interaction method according to an embodiment of the present invention;
FIG. 26 is a diagram illustrating parallax image translation using a virtual camera according to an embodiment of the present invention;
FIG. 27(a) is an effect diagram of a virtual 3D ultrasound image in which a cluster body rolls over time when the two paths of parallax images are output and displayed, according to an embodiment of the present invention; and
FIG. 27(b) is an effect diagram of a virtual 3D ultrasound image observed with the naked eye when the two paths of parallax images are output and displayed, in which a blood flow velocity vector marker changes with time in the flowing state.
Fig. 1 is a block diagram of an ultrasound imaging system according to an embodiment of the present invention. As shown in Fig. 1, the ultrasound imaging system generally includes: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beam synthesis module 5, a signal processing module 6, an image processing module 7, and a display screen display device 8.
In the ultrasonic imaging process, the transmitting circuit 2 sends a delay-focused transmit pulse of a certain amplitude and polarity to the probe 1 through the transmit/receive selection switch 3. Excited by the transmit pulse, the probe 1 transmits ultrasonic waves to a scan target (for example, an organ, tissue, or blood vessel in a human or animal body, not shown in the figure), receives, after a certain delay, the ultrasonic echoes carrying information about the scan target reflected from the target region, and converts these echoes back into electric signals. The receiving circuit 4 receives the electric signals produced by the probe 1 to obtain the volume ultrasonic echo signal and sends it to the beam synthesis module 5. The beam synthesis module 5 performs focusing delay, weighting, channel summation, and related processing on the volume ultrasonic echo signal, and then sends it to the signal processing module 6 for signal processing.
The volume ultrasonic echo signals processed by the signal processing module 6 are sent to the image processing module 7. The image processing module 7 processes the signals differently according to different imaging modes required by a user, and obtains image data of different modes, such as two-dimensional image data and three-dimensional ultrasonic image data. Then, ultrasonic image data of different modes, such as two-dimensional image data including a B image, a C image, a D image and the like, and three-dimensional ultrasonic image data which can be sent to a display device for displaying a three-dimensional image or a 3D stereoscopic image are formed through processing such as logarithmic compression, dynamic range adjustment, digital scan conversion and the like.
The image processing module 7 sends the generated three-dimensional ultrasound image data to the 3D image processing module 11 for marking, segmenting, and the like to obtain volume image data, which is a single frame image or a multi-frame image with volume pixel information.
The volume image data passes through the parallax image generation module 12 to obtain two paths of parallax image data, which are displayed on the display screen display device 8. Based on 3D display technology, the display screen display device 8 exploits the parallax between the left and right eyes so that the viewer's eyes reconstruct the displayed image into a virtual 3D stereoscopic image (hereinafter referred to as a 3D ultrasound image) of the scan target. Display screen display devices fall into two types: glasses-type display devices and naked-eye display devices. A glasses-type display device combines a flat display screen with 3D glasses. A naked-eye display device, i.e., a glasses-free 3D display, consists of a 3D stereoscopic display terminal, playback software, production software, and application techniques; it is a comprehensive stereoscopic display system integrating modern technologies such as optics, photography, electronic computing, automatic control, software, and 3D animation production.
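Converting volume image data into two paths of parallax image data amounts to projecting the scene from two horizontally offset virtual viewpoints, one per eye, so that horizontal disparity encodes depth. A minimal sketch of such a stereo projection follows; the geometry (pinhole cameras, screen distance, interocular baseline) is an illustrative assumption, not the patent's stated method.

```python
import numpy as np

def project(points, eye_x, screen=4.0):
    """Pinhole-project 3-D points onto a screen plane for one eye.
    Assumed geometry: camera at (eye_x, 0, 0) looking along +z,
    screen plane at distance `screen` from the camera."""
    rel = points - np.array([eye_x, 0.0, 0.0])
    scale = screen / rel[:, 2]                 # perspective divide
    return rel[:, :2] * scale[:, None] + np.array([eye_x, 0.0])

def stereo_pair(points, baseline=0.06):
    """Two paths of parallax data: left- and right-eye projections
    whose horizontal disparity varies with point depth."""
    left = project(points, -baseline / 2.0)
    right = project(points, +baseline / 2.0)
    return left, right

# Two on-axis points at different depths (metres).
pts = np.array([[0.0, 0.0, 8.0],
                [0.0, 0.0, 16.0]])
L, R = stereo_pair(pts)
disparity = L[:, 0] - R[:, 0]    # depth-dependent horizontal offset
```

With this off-axis geometry the disparity works out to baseline * (screen/z - 1), so the two depths produce different disparities, which is exactly what the display device converts back into perceived depth.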
The signal processing module 6 and the image processing module 7 may be implemented by using one processor or a plurality of processors, and the 3D image processing module 11 may be implemented by using one processor or a plurality of processors in an integrated manner with the signal processing module 6 and the image processing module 7, or a separate processor may be provided to implement the 3D image processing module 11. The parallax image generation module 12 may be implemented by a pure software program, or may also be implemented by hardware in combination with a software program, which will be described in detail below.
The probe 1 typically comprises an array of array elements. In each transmission of the ultrasonic wave, all of the elements of the probe 1, or a part of them, participate. Each participating element is excited by the transmit pulse and transmits an ultrasonic wave; the waves transmitted by the individual elements superpose during propagation to form a synthesized ultrasonic beam transmitted toward the scan target, and the direction of this synthesized beam is the ultrasound propagation direction referred to herein. The elements participating in transmission may be excited by the transmit pulse simultaneously, or there may be time delays between the times at which they are excited. By controlling these time delays, the propagation direction of the synthesized ultrasound beam can be changed, as explained in detail below.
By controlling the time delay between the times at which the array elements participating in the transmission of the ultrasonic waves are excited by the transmission pulse, the ultrasonic waves transmitted by the respective array elements participating in the transmission of the ultrasonic waves can also be made not to be focused and not to be completely dispersed during propagation, but to form plane waves which are substantially planar as a whole. Such an afocal plane wave is referred to herein as a "plane ultrasound beam".
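The delay control described above can be illustrated numerically: zero inter-element delay yields an unsteered plane ultrasound beam, while a linear delay ramp across the aperture tilts the beam's propagation direction away from the array normal. The following is a minimal sketch with an assumed element geometry and sound speed, not parameters from the patent.

```python
import numpy as np

def steering_delays(element_x, theta_deg, c=1540.0):
    """Per-element transmit delays that steer a plane wave by
    `theta_deg` from the array normal: delay = x * sin(theta) / c,
    shifted so the earliest-firing element has zero delay.

    element_x : ndarray of element positions along the array (m)
    theta_deg : desired deflection angle from the normal (degrees)
    c         : assumed speed of sound (m/s)
    """
    tau = element_x * np.sin(np.radians(theta_deg)) / c
    return tau - tau.min()                    # seconds, all >= 0

x = np.linspace(-0.01, 0.01, 64)             # 64 elements over a 2 cm aperture
d0 = steering_delays(x, 0.0)                 # no steering: all fire together
d20 = steering_delays(x, 20.0)               # 20-degree steer: linear ramp
```

At zero degrees every delay is zero (the vertically transmitted plane wave of Fig. 2); at 20 degrees the delays form a linear ramp across the aperture, which tilts the wavefront as in Fig. 3.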
Alternatively, by controlling the time delays between the times at which the array elements participating in transmission are excited by the transmit pulse, the ultrasonic waves transmitted by the individual elements can be superimposed at a predetermined position so that the ultrasound intensity is maximal there; that is, the waves are "focused" at the predetermined position, which is called the "focal point", and the resulting beam is referred to herein as a "focused ultrasound beam". For example, Fig. 4 is a schematic diagram of transmitting a focused ultrasound beam. Here the elements participating in transmission (in Fig. 4, only some of the elements of the probe 1 participate) operate with predetermined transmit delays (i.e., predetermined delays exist between the times at which the participating elements are excited by the transmit pulse), and the waves transmitted by the individual elements are focused at the focal point to form a focused ultrasound beam.
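The transmit delays that realize such a focused ultrasound beam can be sketched as follows: each element is delayed so that all wavefronts arrive at the focal point simultaneously, with the element farthest from the focus firing first. The array geometry and sound speed below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def focus_delays(element_x, focus, c=1540.0):
    """Per-element transmit delays that focus the beam at `focus`.
    Elements lie at (x, 0); the focus is at (fx, fz).  The farthest
    element fires first so that every wavefront reaches the focal
    point at the same instant.

    element_x : ndarray of element positions along the array (m)
    focus     : (fx, fz) focal point coordinates (m)
    c         : assumed speed of sound (m/s)
    """
    fx, fz = focus
    dist = np.hypot(element_x - fx, fz)       # element-to-focus path length
    return (dist.max() - dist) / c            # seconds, all >= 0

x = np.linspace(-0.01, 0.01, 64)              # 64 elements over a 2 cm aperture
d = focus_delays(x, focus=(0.0, 0.03))        # focus 3 cm in front of centre
```

A quick sanity check is that delay plus time-of-flight is the same constant for every element, which is precisely the focusing condition.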
Or, by controlling the time delay between the times at which the participating array elements are excited by the transmit pulse, the ultrasonic waves emitted by the participating array elements can be made to disperse during propagation, forming a generally divergent wave as a whole. This divergent form of ultrasound is referred to herein as a "divergent ultrasonic beam", such as the one shown in fig. 5.
When a plurality of linearly arranged array elements are excited by the electric pulse signal simultaneously, each array element emits its ultrasonic wave at the same time, and the synthesized wave propagates along the normal of the array element arrangement plane. For example, for the vertically transmitted plane wave shown in fig. 2, there is no time delay between the array elements participating in the transmission (i.e., no delay between the times at which they are excited by the transmit pulse), and all elements are excited simultaneously. The generated beam is a plane wave, i.e., a plane ultrasonic beam, whose propagation direction is substantially perpendicular to the emitting surface of the probe 1; that is, the angle between the propagation direction of the synthesized beam and the normal of the array element arrangement plane is zero.

However, if the excitation pulses applied to the array elements carry a time delay, so that the elements emit in sequence, the propagation direction of the synthesized beam forms a certain angle with the normal of the array element arrangement plane, namely the deflection angle of the synthesized beam. By changing the time delay, both the magnitude of this deflection angle and the deflection direction within the scanning plane can be adjusted. For example, fig. 3 shows a deflected transmitted plane wave, in which there is a predetermined time delay between the participating array elements (i.e., between the times at which they are excited by the transmit pulse), and the elements are excited in a predetermined sequence. The generated plane ultrasonic beam propagates at an angle (e.g., angle a in fig. 3) to the normal of the array element arrangement plane of the probe 1, i.e., the deflection angle of the plane ultrasonic beam. By varying the delay, the magnitude of angle a can be adjusted.
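The linear delay ramp that produces the deflection angle a can be illustrated as follows. This is a hedged sketch: the function name, element pitch, and sound speed are assumptions for illustration, not taken from the patent.

```python
import math

def steer_delays(n_elements, pitch, angle_deg, c=1540.0):
    """Linear transmit-delay ramp that deflects a plane wave.

    A delay of i * pitch * sin(a) / c on the i-th element tilts the
    synthesized wavefront by angle a from the array normal:
    angle_deg = 0 reproduces simultaneous excitation (fig. 2),
    and a nonzero angle reproduces the deflection of fig. 3.
    """
    a = math.radians(angle_deg)
    return [i * pitch * math.sin(a) / c for i in range(n_elements)]
```

With a zero angle all delays vanish and the elements fire together; with a nonzero angle the delays grow linearly across the aperture, which is exactly the "predetermined sequence" of excitation described above.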
Similarly, whether the synthesized beam is a plane ultrasonic beam, a focused ultrasonic beam, or a divergent ultrasonic beam, its "deflection angle" (the angle formed between the direction of the synthesized beam and the normal of the array element arrangement plane) can be adjusted by adjusting the time delay between the times at which the array elements involved in the transmission are excited by the transmit pulse.
In addition, when three-dimensional ultrasonic imaging is performed, an area array probe is used, as shown in fig. 6(a). The area array probe is regarded as a plurality of array elements 112 arranged along two directions, and a delay control line is provided for each array element to adjust its time delay. By varying the individual delays of the array elements during transmission and reception, beam steering and dynamic focusing can be applied to the ultrasonic beams, changing the propagation direction of the synthesized beam, sweeping the beam through three-dimensional space, and building a three-dimensional ultrasonic image database. As shown in fig. 6(b), the area array probe 1 includes a plurality of array elements 112; by changing the delays of the elements participating in the transmission, the emitted volume ultrasonic beam can be made to propagate along the direction of the dashed arrow F51, forming in three-dimensional space a swept volume A1 (the solid structure drawn with dashed lines in fig. 6(b)) for acquiring three-dimensional ultrasonic image data. The swept volume A1 has a predetermined offset relative to a reference volume A2 (the solid structure drawn with solid lines in fig. 6(b)), where the reference volume A2 is the swept volume formed in three-dimensional space when the beam emitted by the participating array elements propagates along the normal of the array element arrangement plane (solid arrow F52 in fig. 6(b)).
It can be seen that the swept volume A1 has an offset relative to the reference volume A2. This offset measures the deflection, relative to A2, of swept volumes formed by propagation along different ultrasonic propagation directions in three-dimensional space, and can be expressed by the following two angles. First, within a swept volume, the propagation direction of the ultrasonic beam on a scanning plane A21 (the quadrangle drawn with dashed lines in fig. 6(b)) forms a predetermined deflection angle Φ with the normal of the array element arrangement plane, selected within the range [0°, 90°). Second, as shown in fig. 6(c), in the rectangular coordinate system on the array element arrangement plane P1, the rotation angle θ, formed by rotating counterclockwise from the X-axis to the line containing the projection P51 (the dashed arrow in plane P1 in fig. 6(c)) of the beam's propagation direction onto the plane P1, is selected within the range [0°, 360°). When the deflection angle Φ is zero, the swept volume A1 has zero offset relative to the reference volume A2. In three-dimensional ultrasonic imaging, the deflection angle Φ and the rotation angle θ can be changed by varying the individual delays of the array elements, thereby adjusting the offset of the swept volume A1 relative to the reference volume A2 and forming different swept volumes along different ultrasonic propagation directions in three-dimensional space. The swept volume may equally be transmitted by a probe combination structure in which linear array probes are arranged in an array, with the same transmission scheme. For example, in fig. 6(b), the volume ultrasonic echo signal returned from the swept volume A1 yields the three-dimensional ultrasonic image data B1, and the volume ultrasonic echo signal returned from the swept volume A2 yields the three-dimensional ultrasonic image data B2.
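The two angles Φ and θ together determine the propagation direction of the swept volume in three-dimensional space. A minimal sketch of this geometry follows, assuming the array element arrangement plane is the x-y plane and its normal is the z axis; the function name is illustrative, not from the patent.

```python
import math

def beam_direction(phi_deg, theta_deg):
    """Unit propagation vector for deflection angle phi and rotation angle theta.

    phi   : deflection from the array normal (arrow F52), in [0, 90) degrees
    theta : counterclockwise angle from the X-axis to the beam's
            projection P51 on plane P1, in [0, 360) degrees
    phi = 0 gives (0, 0, 1): the swept volume A1 coincides with the
    reference volume A2 (zero offset).
    """
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))
```

Sweeping phi and theta over their ranges covers every propagation direction in the half-space in front of the probe, which is why varying the per-element delays suffices to form different swept volumes.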
An ultrasonic beam "emitted toward the scanning target so as to propagate within the space in which the scanning target is located and form the above-described swept volume" is herein called a volume ultrasonic beam; it may comprise a set of one or more transmitted ultrasonic beams. Correspondingly, depending on the beam type, a plane ultrasonic beam so emitted is called a volume plane ultrasonic beam, a focused ultrasonic beam so emitted is called a volume focused ultrasonic beam, a divergent ultrasonic beam so emitted is called a volume divergent ultrasonic beam, and so on. In general, the type name of the ultrasonic beam may be inserted between "volume" and "ultrasonic beam".
A volume plane ultrasonic beam generally covers almost the entire imaging region of the probe 1, so when imaging with a volume plane ultrasonic beam one frame of three-dimensional ultrasonic image can be obtained from a single transmission, and the imaging frame rate can therefore be high. When imaging with a volume focused ultrasonic beam, because the beam is focused at a focal point, only one or a few scan lines are obtained per swept volume; all scan lines of the imaging region are obtained only after multiple transmissions and are then combined into one frame of three-dimensional ultrasonic image of the imaging region. The frame rate of volume focused imaging is therefore relatively low. However, the energy of each transmission of a volume focused ultrasonic beam is more concentrated, and imaging takes place only where that energy is concentrated, so the echo signal has a high signal-to-noise ratio and better-quality ultrasonic image data of the tissue structure can be obtained.
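The frame-rate trade-off described above can be made concrete with simple arithmetic. The numbers below (pulse repetition frequency, scan-line count) are illustrative assumptions, not values from the patent.

```python
def frame_rate(prf_hz, transmits_per_frame):
    """Frames per second = pulse repetition frequency / transmissions per frame.

    A volume plane beam can form one 3-D frame from a single transmission;
    a volume focused beam needs one transmission per scan line (or per
    small group of lines), so a frame requires many transmissions.
    """
    return prf_hz / transmits_per_frame

# Illustrative comparison at an assumed 5 kHz pulse repetition frequency:
plane_fps = frame_rate(5000, 1)      # one transmission per frame -> 5000.0
focused_fps = frame_rate(5000, 128)  # 128 scan lines per frame -> 39.0625
```

The two-orders-of-magnitude gap in frame rate is what makes plane-wave transmission attractive for high-frame-rate fluid velocity vector imaging, while focused transmission remains preferable for the high-SNR tissue background image.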
Based on three-dimensional ultrasonic imaging technology and 3D display technology, the invention superimposes fluid velocity vector information on the 3D ultrasonic image, providing the user with a better viewing angle: fluid information such as blood flow velocity and flow direction at the scanned position can be observed in real time, the human eye perceives a more stereoscopic, nearly lifelike virtual 3D ultrasonic image, and the flow path of the fluid is reproduced stereoscopically. The fluids referred to herein may include body fluids such as blood, intestinal fluid, lymph, interstitial fluid, and intracellular fluid. Various embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in fig. 8, the present embodiment provides a three-dimensional ultrasonic fluid imaging method. Based on three-dimensional ultrasonic imaging technology, the ultrasonic image is presented on a display screen through 3D display technology, and a stereoscopic, nearly lifelike 3D imaging effect is reproduced for the observer. This provides the user with a better viewing angle and a richer visual experience than traditional display modes, makes the true position of the scanned site clear in real time, lets the displayed image convey fluid information more faithfully, supplies medical staff with a more comprehensive and accurate image analysis result, and establishes a novel three-dimensional display mode for fluid imaging on an ultrasound system.
FIG. 8(a) is a schematic flow chart of displaying velocity vector markers in a three-dimensional ultrasonic fluid imaging method according to an embodiment of the present invention; fig. 8(b) is a schematic flow chart of displaying a cluster in a three-dimensional ultrasonic fluid imaging method according to an embodiment of the present invention. Some steps of the two are the same and some steps may contain one another, as detailed in the following description.
In step S100, the transmission circuit 2 excites the probe 1 to emit a volume ultrasonic beam toward the scanning target, and the beam propagates in the space in which the scanning target is located to form a swept volume as shown in fig. 6. In some embodiments of the present invention, the probe 1 is an area array probe, or may be a probe combination structure in which linear array probes are arranged in an array, and so on. Either structure ensures that the feedback data of a swept volume can be obtained in time within a single scan, improving the scanning and imaging speed.
The volume ultrasonic beam emitted toward the scanning target may include at least one of several beam types, such as a volume focused ultrasonic beam, a volume unfocused ultrasonic beam, a volume virtual-source ultrasonic beam, a volume non-diffracting ultrasonic beam, a volume divergent ultrasonic beam, or a volume plane ultrasonic beam, or a combination of two or more of these types ("or more" herein includes the stated number, and likewise hereinafter). Of course, embodiments of the present invention are not limited to the above types of volume ultrasonic beams.
In some embodiments of the present invention, as shown in fig. 9, scanning with a volume plane wave shortens the scan time of the three-dimensional ultrasonic image and increases the imaging frame rate, thereby enabling fluid velocity vector imaging at a high frame rate. Accordingly, step S100 includes step S101: transmitting a volume plane ultrasonic beam toward the scanning target. In step S201, echoes of the volume plane ultrasonic beam are received, yielding a volume plane ultrasonic echo signal, from which three-dimensional ultrasonic image data can be reconstructed and/or the fluid velocity vector information of a target point in the scanning target can be calculated. The fluid velocity vector information referred to herein contains at least the velocity vector (i.e., velocity magnitude and velocity direction) of the target point, and may also contain the corresponding position information of the target point. Of course, it may further include any other velocity-related information obtainable from the velocity magnitude and direction, such as acceleration information. For example, in fig. 9, in step S301, three-dimensional ultrasonic image data of at least a part of the scanning target is acquired from the volume plane ultrasonic echo signal; in step S401, fluid velocity vector information of a target point within the scanning target is obtained based on the volume plane ultrasonic echo signal.
The scanning target may be a tubular tissue structure carrying flowing matter, such as an organ, tissue, or blood vessel in a human or animal body. The target point in the scanning target is a point or position of interest within it. It is generally represented by the corresponding position, shown on the display device, in the two paths of parallax image data converted from the volume image data of the scanning target; through the image conversion mapping relationship, this position corresponds to a virtual spatial point or position of interest that can be marked or displayed in the virtual 3D ultrasonic image, and may be a single virtual spatial point or a spatial neighborhood of such a point, likewise hereinafter. Because the 3D ultrasonic image is virtual, the target point corresponds to a virtual spatial point or position in the 3D ultrasonic image; through the spatial image mapping relationship it also corresponds to a mapped position on the image shown on the display screen, i.e., to corresponding pixels or neighborhoods of pixels in the two paths of parallax image data, and to a voxel or neighborhood of voxels in the three-dimensional ultrasonic image data.
Alternatively, in step S100, a volume focused ultrasonic beam may be transmitted toward the scanning target and propagated in the space where the scanning target is located to form the swept volume, so that in step S200, by receiving the echo of the volume focused ultrasonic beam, a volume focused ultrasonic echo signal is obtained, from which three-dimensional ultrasonic image data can be reconstructed and/or the fluid velocity vector information of a target point in the scanning target can be calculated.
Still alternatively, as shown in fig. 10, step S100 includes step S101 and step S102. In step S101, a volume plane ultrasonic beam is transmitted toward the scanning target; its echo is received in step S201 to obtain a volume plane ultrasonic echo signal, and in step S401 the fluid velocity vector information of a target point within the scanning target is obtained based on this signal. In step S102, a volume focused ultrasonic beam is transmitted toward the scanning target; its echo is received in step S202 to obtain a volume focused ultrasonic echo signal, and in step S302 three-dimensional ultrasonic image data of at least a part of the scanning target is obtained from this signal. The volume focused ultrasonic echo signal can be used to reconstruct high-quality three-dimensional ultrasonic image data, serving as a background image representing the tissue structure.
If two types of volume ultrasonic beams are employed in step S100, they are transmitted toward the scanning target alternately. For example, transmissions of the volume focused ultrasonic beam are inserted into the process of transmitting the volume plane ultrasonic beam toward the scanning target, i.e., step S101 and step S102 as shown in fig. 10 are performed alternately. This ensures the synchronism of image data acquisition for the two types of volume ultrasonic beams and improves the accuracy of the fluid velocity vector information of the target point superimposed on the background image.
In step S100, to obtain a volume ultrasonic echo signal for calculating the fluid velocity vector information of a target point, the ultrasonic beam may be transmitted toward the scanning target according to a Doppler imaging technique; for example, a volume ultrasonic beam may be transmitted toward the scanning target along one ultrasonic propagation direction and propagated within the space in which the scanning target is located to form one swept volume. Three-dimensional ultrasonic image data for calculating the fluid velocity vector information of the target point is then acquired based on the volume ultrasonic echo signal fed back from that swept volume.
Of course, to make the calculated fluid velocity vector information more faithful and to reproduce the velocity vector of the target point in real three-dimensional space more realistically, in some embodiments of the present invention volume ultrasonic beams may be transmitted toward the scanning target along a plurality of ultrasonic propagation directions to form a plurality of swept volumes, where each swept volume derives from a volume ultrasonic beam transmitted in one ultrasonic propagation direction. The image data for calculating the fluid velocity vector information of the target point is acquired from the volume ultrasonic echo signals fed back from the plurality of swept volumes. For example, steps S200 and S400 include:
first, receiving the echoes of the volume ultrasonic beams from the plurality of swept volumes to obtain a plurality of groups of volume ultrasonic echo signals;
then, calculating a velocity component of a target point in the scanning target based on one group of the volume ultrasonic echo signals, thereby obtaining a plurality of velocity components from the plurality of groups of volume ultrasonic echo signals, respectively;
then, synthesizing the velocity vector of the target point from the plurality of velocity components, and generating the fluid velocity vector information of the target point.
The plurality of ultrasonic propagation directions comprises two or more ultrasonic propagation directions ("or more" includes the stated number, likewise hereinafter).
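The synthesis step above, in which a three-dimensional velocity vector is recovered from per-direction velocity components, can be sketched as a least-squares solve: each transmit direction measures only the projection of the target point's velocity onto that direction. This is an illustrative reconstruction under that assumption, not the patent's specific estimator.

```python
def synthesize_velocity(directions, components):
    """Recover a 3-D velocity vector v from projections d_k . v = m_k.

    directions : unit propagation vectors (at least 3, not coplanar)
    components : measured velocity components along those directions
    Solves the normal equations (D^T D) v = D^T m by Gaussian
    elimination with partial pivoting.
    """
    n = 3
    A = [[sum(d[i] * d[j] for d in directions) for j in range(n)] for i in range(n)]
    b = [sum(d[i] * m for d, m in zip(directions, components)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    v = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        v[r] = (b[r] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v
```

With three mutually orthogonal directions the solve is trivial; with tilted or redundant directions the same normal-equations form gives the best-fit vector, which is why more propagation directions improve the fidelity of the synthesized velocity.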
When transmitting ultrasonic beams toward the scanning target along a plurality of ultrasonic propagation directions, the transmissions may be performed alternately according to the different propagation directions. For example, if ultrasonic beams are transmitted toward the scanning target along two ultrasonic propagation directions, the beam is first transmitted along the first direction and then along the second direction, completing one scanning cycle, and the cycle is then repeated in sequence. Alternatively, the scan may be completed by transmitting along one ultrasonic propagation direction and then along another, until all propagation directions have been executed in turn. To obtain the different ultrasonic propagation directions, the time delays of the array elements (or groups of array elements) participating in the transmission can be changed; see the explanations of figs. 2 to 6(a)-6(c).
For example, a process of transmitting volume ultrasonic beams toward the scanning target along a plurality of ultrasonic propagation directions may include: transmitting a first volume ultrasonic beam toward the scanning target, the first volume ultrasonic beam having a first ultrasonic propagation direction; and transmitting a second volume ultrasonic beam toward the scanning target, the second volume ultrasonic beam having a second ultrasonic propagation direction. The echoes of the first and second volume ultrasonic beams are received respectively, yielding a first and a second volume ultrasonic echo signal; two velocity components are obtained from these two groups of echo signals, and the fluid velocity vector of the target point is obtained after synthesis. For the arrangement of the ultrasonic propagation directions, see the detailed description of fig. 2. In some of these embodiments, the first and second volume ultrasonic beams may be volume plane ultrasonic beams, and the corresponding echo signals are then the first and second volume plane ultrasonic echo signals.
For another example, the process of transmitting volume ultrasonic beams toward the scanning target along a plurality of ultrasonic propagation directions may further include: transmitting ultrasonic beams toward the scanning target along N (N being any natural number greater than or equal to 3) ultrasonic propagation directions and receiving their echoes, obtaining N groups of volume ultrasonic echo signals, where each group derives from the ultrasonic beam transmitted in one propagation direction. The N groups of volume ultrasonic echo signals may be used to calculate the fluid velocity vector information of the target point.
Furthermore, in some embodiments of the present invention, the volume ultrasonic beam may be propagated within the space in which the scanning target is located to form the swept volume by exciting some or all of the ultrasonic transmitting array elements to emit an ultrasonic beam toward the scanning target along one or more ultrasonic propagation directions. For example, the volume ultrasonic beam in this embodiment may be a volume plane ultrasonic beam.
Still alternatively, in some embodiments of the present invention, as shown in figs. 7(a) and 7(b), the ultrasonic transmitting array elements may be divided into a plurality of array element regions 111, and some or all of the regions are excited to emit ultrasonic beams toward the scanning target along one or more ultrasonic propagation directions, so that the volume ultrasonic beams propagate in the space in which the scanning target is located to form swept volumes, each swept volume deriving from the volume ultrasonic beam transmitted in one propagation direction. For the principle of forming the swept volume, see the detailed description of figs. 6(a)-6(c), which is not repeated here. For example, the volume ultrasonic beam in this embodiment may be a volume focused ultrasonic beam, a volume plane ultrasonic beam, or the like, but is not limited to these types. When a volume focused ultrasonic beam is used, the ultrasonic transmitting array elements are divided into a plurality of array element regions; exciting one region produces one focused ultrasonic beam, while exciting a plurality of regions simultaneously produces a plurality of focused ultrasonic beams at the same time, forming a volume focused ultrasonic beam and yielding one swept volume.
As shown in figs. 7(a) and 7(b), taking the transmission of focused ultrasonic beams as an example, each array element region 111 is used to generate at least one focused ultrasonic beam (an arc with an arrow in the figure). When a plurality of array element regions 111 are excited simultaneously, a plurality of focused ultrasonic beams propagate in the space in which the scanning target is located and form a swept volume 11 composed of a volume focused ultrasonic beam; the focused beams lying in one plane of the swept volume 11 form a scanning plane 113 (shown by solid arrows, each solid arrow representing one focused ultrasonic beam), and the swept volume 11 can also be regarded as composed of a plurality of scanning planes 113. By changing the time delays of the transmitting array elements participating in the transmission within each array element region 111, the direction of the focused ultrasonic beams can be changed, thereby changing the propagation direction of the plurality of focused beams in the space in which the scanning target is located.
In some embodiments of the present invention, a plurality of volume ultrasonic beams are transmitted toward the scanning target along each ultrasonic propagation direction, yielding a plurality of volume ultrasonic echo signals for subsequent ultrasonic image data processing. For example, a plurality of volume plane ultrasonic beams are transmitted toward the scanning target along each of a plurality of ultrasonic propagation directions, or a plurality of volume focused ultrasonic beams are transmitted toward the scanning target along one or more ultrasonic propagation directions. Each transmission of a volume ultrasonic beam yields one corresponding volume ultrasonic echo signal.
The transmissions of the multiple volume ultrasonic beams toward the scanning target are performed alternately according to the different ultrasonic propagation directions, so that the obtained echo data can be used to calculate the velocity vector of a target point at the same moment, improving the calculation accuracy of the fluid velocity vector information. For example, if volume ultrasonic beams are transmitted toward the scanning target N times along each of three ultrasonic propagation directions, the beam may be transmitted at least once along the first direction, then at least once along the second direction, then at least once along the third direction, completing one scanning cycle; the cycle is then repeated in sequence until the scheduled number of transmissions in all directions is completed. The numbers of transmissions along the different propagation directions within one scanning cycle may be the same or different. For example, transmitting along two ultrasonic propagation directions may follow the order A1 B1 A2 B2 A3 B3 A4 B4 ..., where Ai is the i-th transmission in the first ultrasonic propagation direction and Bi is the i-th transmission in the second ultrasonic propagation direction. Transmitting along three ultrasonic propagation directions may follow the order A1 B1 B2 C1 A2 B3 B4 C2 A3 B5 B6 C3 ..., where Ai is the i-th transmission in the first direction, Bi is the i-th transmission in the second direction, and Ci is the i-th transmission in the third direction.
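The alternating transmission orders above (e.g. A1 B1 A2 B2 ... with equal counts, or A1 B1 B2 C1 ... with unequal counts per cycle) can be generated by a simple round-robin scheduler. The following sketch is illustrative; the function and its labels are assumptions of this example, not the patent's control logic.

```python
def transmit_schedule(counts, n_cycles):
    """Round-robin transmit order across propagation directions.

    counts : ordered mapping of direction label -> transmissions per
             scanning cycle; counts per direction may differ, e.g.
             {'A': 1, 'B': 2, 'C': 1} yields A1 B1 B2 C1 A2 B3 B4 C2 ...
    """
    index = {d: 0 for d in counts}  # running transmission count per direction
    order = []
    for _ in range(n_cycles):
        for direction, shots in counts.items():
            for _ in range(shots):
                index[direction] += 1
                order.append(f"{direction}{index[direction]}")
    return order
```

Interleaving the directions within each cycle, rather than finishing one direction before starting the next, keeps the per-direction measurements close together in time, which is what allows the velocity components to be treated as belonging to the same moment.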
Further, when two types of volume ultrasonic beams are transmitted toward the scanning target in the above step S100, the two types may be transmitted alternately. For example, in some embodiments of the present invention, the step S100 includes:
first, transmitting the volume focused ultrasonic beam toward the scanning target a plurality of times to acquire data for reconstructing three-dimensional ultrasonic image data;
then, transmitting the volume plane ultrasonic beam toward the scanning target a plurality of times along one or more ultrasonic propagation directions to acquire image data for calculating the velocity vector of the target point.
On this basis, the transmissions of the volume focused ultrasonic beam toward the scanning target may be inserted into the process of transmitting the volume plane ultrasonic beam toward the scanning target. For example, the multiple transmissions of the volume focused ultrasonic beam are inserted uniformly into the above-described multiple transmissions of the volume plane ultrasonic beam.
The continuous "Ai Bi Ci" volume plane ultrasonic beam transmission described above mainly serves to obtain data for calculating the velocity information of the target point; the other type of volume ultrasonic beam, used to obtain the reconstructed three-dimensional ultrasonic image, is transmitted by insertion into that continuous "Ai Bi Ci" sequence. The alternating transmission of the two beam types is explained in detail below, taking as an example the insertion of multiple volume focused ultrasonic beam transmissions into the continuous "Ai Bi Ci" volume plane ultrasonic beam transmission process.
A plurality of volume plane ultrasonic beams are transmitted to the scanning target along three ultrasonic propagation directions in the following order:
A1 B1 C1 D1 A2 B2 C2 D2 A3 B3 C3 D3 ... Ai Bi Ci Di, and so on;
where Ai is the ith transmission in the first ultrasonic propagation direction; Bi is the ith transmission in the second ultrasonic propagation direction; Ci is the ith transmission in the third ultrasonic propagation direction; and Di is the ith volume focused ultrasonic beam transmission.
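As an illustration only, the interleaving rule above (one volume-focused transmission Di inserted after each round of plane-wave transmissions Ai, Bi, Ci) can be sketched as a short schedule generator. The function name `interleaved_schedule` and the letter labels are hypothetical and used purely to make the ordering concrete.

```python
# Hypothetical sketch of the interleaved transmission schedule described above:
# after each round of volume plane-wave transmissions along the three propagation
# directions (Ai, Bi, Ci), one volume-focused transmission (Di) is inserted.

def interleaved_schedule(num_rounds, plane_directions=("A", "B", "C")):
    """Return transmission labels in the order A1 B1 C1 D1 A2 B2 C2 D2 ..."""
    schedule = []
    for i in range(1, num_rounds + 1):
        for d in plane_directions:          # plane-wave transmissions, one per direction
            schedule.append(f"{d}{i}")
        schedule.append(f"D{i}")            # inserted volume-focused transmission
    return schedule

sequence = interleaved_schedule(3)
# e.g. ['A1', 'B1', 'C1', 'D1', 'A2', 'B2', 'C2', 'D2', 'A3', 'B3', 'C3', 'D3']
```

Other interleaving rules mentioned in the text (e.g. one Di after several full plane-wave rounds) amount to changing where `schedule.append(f"D{i}")` is placed in the loop.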
The above gives a comparatively simple manner of interposing the volume focused ultrasonic beam transmissions. Alternatively, one volume focused ultrasonic beam transmission may be interposed after a plurality of volume plane ultrasonic beam transmissions in different ultrasonic wave propagation directions; or at least a part of the plurality of volume plane ultrasonic beam transmissions to the scanning target and at least a part of the plurality of volume focused ultrasonic beam transmissions to the scanning target may be performed alternately; and so on. Any alternating transmission mode that realizes such alternate execution may be used. In this embodiment, three-dimensional ultrasonic image data of better quality can be obtained by using the volume focused ultrasonic beam, while fluid velocity vector information of high real-time performance can be obtained by exploiting the high frame rate of the volume plane ultrasonic beam; in order to achieve better synchronism in data acquisition, the two types of ultrasonic waveforms are transmitted alternately.
Therefore, the execution order and rule for transmitting the plurality of volume ultrasonic beams to the scanning target along different ultrasonic propagation directions can be selected arbitrarily; the possibilities are not exhaustively listed here and are not limited to the specific embodiments provided above.
In step S200, the receiving circuit 4 and the beam synthesis module 5 receive the echo of the volumetric ultrasonic beam emitted in step S100 described above, and obtain a volumetric ultrasonic echo signal.
Whatever type of volume ultrasonic beam is used in the above step S100, the corresponding type of volume ultrasonic echo signal is generated in step S200 from the received echo. For example, when the echo of the volume focused ultrasonic beam transmitted in step S100 is received, a volume focused ultrasonic echo signal is obtained; when the echo of the volume plane ultrasonic beam transmitted in step S100 is received, a volume plane ultrasonic echo signal is obtained; and so on, with the name of the beam type inserted between "volume" and "ultrasonic echo signal".
When the receiving circuit 4 and the beam synthesis module 5 receive the echo of the volume ultrasonic beam transmitted in step S100, the echo may be received by using all or part of the array elements that participated in ultrasonic transmission, the same elements implementing both the transmitting and receiving functions; or the array elements on the probe may be divided into a receiving part and a transmitting part, with the echo received by using all or part of the array elements assigned to reception; and so on. For the reception of the volume ultrasonic beam and the acquisition of the volume ultrasonic echo signal, reference may be made to common practice in the field.
Each time a volume ultrasonic beam is transmitted along one ultrasonic wave propagation direction in step S100, its echo is received in step S200 and a corresponding set of volume ultrasonic echo signals is obtained. For example, when the echo of the volume ultrasonic beam transmitted to the scanning target along one ultrasonic wave propagation direction in step S100 is received, one set of volume ultrasonic echo signals is obtained in step S200; correspondingly, in step S300 and step S400, three-dimensional ultrasonic image data of at least a part of the scanning target and fluid velocity vector information of the target point are respectively obtained from this set of volume ultrasonic echo signals. When echoes of volume ultrasonic beams transmitted to the scanning target along a plurality of ultrasonic wave propagation directions are received, a plurality of sets of volume ultrasonic echo signals are obtained in step S200, each set being derived from the echo of the volume ultrasonic beam transmitted in one ultrasonic wave propagation direction. Correspondingly, in step S300 and step S400, three-dimensional ultrasonic image data of at least a part of the scanning target is obtained from one set of the volume ultrasonic echo signals, and the fluid velocity vector information of the target point can be obtained from the plurality of sets of volume ultrasonic echo signals.
Furthermore, when the volume ultrasonic beam is transmitted a plurality of times along each ultrasonic wave propagation direction, the echoes are received in step S200, and the correspondingly obtained set of volume ultrasonic echo signals contains a plurality of volume ultrasonic echo signals, one volume ultrasonic echo signal being acquired for each volume ultrasonic beam transmission.
For example, in the case where a plurality of volume plane ultrasonic beams are transmitted to the scanning target along each of a plurality of ultrasonic wave propagation directions in step S100, in step S200 the echoes of the volume plane ultrasonic beams corresponding to the plurality of ultrasonic wave propagation directions may be respectively received to obtain a plurality of sets of volume plane ultrasonic echo signals; each set of volume plane ultrasonic echo signals comprises a plurality of volume plane ultrasonic echo signals, each derived from the echo obtained by one transmission of the volume plane ultrasonic beam to the scanning target in one ultrasonic wave propagation direction.
For another example, when the volume focused ultrasonic beam is transmitted to the scan target a plurality of times in step S100, the echoes of the volume focused ultrasonic beam are received in step S200, and a plurality of sets of volume focused ultrasonic echo signals are obtained.
Therefore, whatever type of volume ultrasonic beam is transmitted the corresponding number of times in step S100, the echoes of that type of volume ultrasonic beam are received in step S200, and the corresponding number of sets of volume ultrasonic echo signals of the corresponding type are generated.
In step S300, the image processing module 7 acquires three-dimensional ultrasound image data of at least a part of the scanning target from the volume ultrasonic echo signal. By performing 3D beam synthesis imaging on the volume ultrasonic echo signals, three-dimensional ultrasound image data B1 and B2 as shown in fig. 6(B) may be obtained, which may include: the position information of the space points and the image information corresponding to the space points, wherein the image information comprises characteristic information of the space point such as the gray attribute and the color attribute.
In some embodiments of the present invention, the three-dimensional ultrasound image data may be imaged using the volume plane ultrasonic beam, and may also be imaged using the volume focused ultrasonic beam. Because the energy of each transmission of the volume focused ultrasonic beam is concentrated, and imaging is performed only at the position where the energy is concentrated, the signal-to-noise ratio of the obtained echo signal is high and the quality of the obtained three-dimensional ultrasonic image data is good; moreover, the main lobe of the volume focused ultrasonic beam is narrow and its side lobes are low, so the lateral resolution of the obtained three-dimensional ultrasonic image data is high. Therefore, in some embodiments of the present invention, the three-dimensional ultrasound image data of step S500 may be imaged using the volume focused ultrasonic beam. Meanwhile, in order to obtain three-dimensional ultrasound image data of higher quality, the volume focused ultrasonic beam may be transmitted a plurality of times in step S100 to scan and obtain one frame of three-dimensional ultrasound image data.
Of course, the three-dimensional ultrasound image data may also be obtained from the volume plane ultrasonic echo signals obtained in the aforementioned step S200. When a plurality of sets of volume ultrasonic echo signals are obtained in step S200, one set of volume ultrasonic echo signals may be selected to obtain the three-dimensional ultrasound image data of at least a portion of the scanning target; or the three-dimensional ultrasound image data may be obtained after image data optimization based on the plurality of sets of volume ultrasonic echo signals.
In order to present the overall movement of the fluid in the 3D ultrasound image, step S300 may further include: in step S310 in fig. 8(b), obtaining enhanced three-dimensional ultrasound image data of at least a portion of the scanning target by a gray-scale blood flow imaging technique according to the volume ultrasonic echo signal. That is, in the dynamic display method shown in fig. 8(b), step S310 is employed after step S200. The gray-scale blood flow imaging technique, also known as two-dimensional blood flow display technology, is a new imaging technique that uses digitally encoded ultrasound to observe blood flow, blood vessels and the surrounding soft tissue, and displays them in gray scale.
The processing of the three-dimensional ultrasound image data in the above embodiments may be understood as processing the three-dimensional data of an entire frame of three-dimensional ultrasound image data, or as respectively processing the set of one or more pieces of two-dimensional ultrasound image data contained in one frame of three-dimensional ultrasound image data. Therefore, in some embodiments of the present invention, step S310 may include: respectively processing, by the gray-scale blood flow imaging technique, one or more pieces of two-dimensional ultrasound image data contained in one frame of three-dimensional ultrasound image data, and then assembling the enhanced three-dimensional ultrasound image data of the scanning target.
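The slice-by-slice alternative above can be sketched as follows. Note this is only an illustration of the restacking pattern: `enhance_slice` here is a simple contrast stretch standing in for the gray-scale blood flow imaging technique, whose actual processing is not specified at this level of detail.

```python
import numpy as np

# Hedged sketch: process each 2D slice of a 3D frame with a placeholder
# gray-scale enhancement, then reassemble the enhanced slices into 3D data.

def enhance_slice(img):
    """Placeholder enhancement: stretch gray values to the range [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def enhance_volume(volume):
    """Process each 2D slice of the (Z, X, Y) volume, then restack them."""
    return np.stack([enhance_slice(s) for s in volume], axis=0)

vol = np.array([[[2.0, 4.0], [6.0, 8.0]],
                [[1.0, 3.0], [5.0, 9.0]]])   # two 2x2 slices of one 3D frame
enh = enhance_volume(vol)
# each slice is independently stretched to span [0, 1]
```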
In step S400, the image processing module 7 obtains the fluid velocity vector information of the target point within the scanning target based on the volume ultrasonic echo signals obtained in step S200. The fluid velocity vector information referred to herein includes the velocity vector of the target point (i.e., velocity magnitude and velocity direction), and/or the corresponding position information of the target point in the three-dimensional ultrasound image data. According to the image mapping relationship between the three-dimensional ultrasound image data and the two paths of parallax image data converted from it in step S600, the corresponding position information of the target point in the two paths of parallax image data can be obtained from its corresponding position information in the three-dimensional ultrasound image data. Conversely, based on the image mapping relationship, the corresponding position information of the target point in the three-dimensional ultrasound image data can be obtained from its corresponding position information in the two paths of parallax image data.
In this embodiment, the target point may be selected by a user: the distribution density of target points in the scanning target, or the position of a target point (either the position of a selected target point or the initial position used to calculate the fluid velocity vector of the target point), may be set by acquiring an instruction input by the user through the human-computer interaction device. For example, the distribution density may be selected by moving a cursor displayed in the image or by gesture input, a distribution density instruction input by the user is obtained, and target points are randomly selected in the scanning target according to the distribution density instruction; and/or the position of a target point may be selected by moving a cursor displayed in the image or by gesture input, a marker position instruction input by the user is obtained, and the target point is obtained according to the marker position instruction. A target point comprises one or more discretely distributed voxels, a range of voxels, or a data block. The distribution density refers to the number of target points that may appear within a predetermined area range, which may be the entire stereo region of the scanning target or a partial region of the scanning target, that is, the range of initial positions used when calculating the velocity vector of the target point in the second mode described below.
However, the invention is not limited thereto. For example, the position of the target point, or the initial position used to calculate the fluid velocity vector of the target point, may be randomly selected within the scanning target according to a distribution density preset by the system. These methods give the user a flexible selection mode and improve the use experience.
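The density-driven random selection described above can be sketched as a small helper. The function name `select_target_points` and its parameters are hypothetical; it simply converts a distribution density into a count of randomly placed voxel coordinates within a region.

```python
import random

# Hypothetical sketch of selecting target points at a chosen distribution
# density within a (nx, ny, nz) region of the scan target.

def select_target_points(region_shape, density, seed=0):
    """Randomly pick distinct target-point voxel coordinates so that roughly
    density * (number of voxels in the region) points are selected."""
    nx, ny, nz = region_shape
    count = max(1, int(density * nx * ny * nz))
    rng = random.Random(seed)          # fixed seed for reproducibility here
    points = set()
    while len(points) < count:
        points.add((rng.randrange(nx), rng.randrange(ny), rng.randrange(nz)))
    return sorted(points)

points = select_target_points((8, 8, 8), density=0.01)
# about 1% of the 512 voxels in the region become target points
```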
The process, included in step S400, of obtaining the fluid velocity vector information of the target point within the scanning target based on the volume ultrasonic echo signal will be explained in detail below.
The fluid velocity vector information of the target point calculated in step S400 is mainly used for marking on the three-dimensional ultrasound image data, so different fluid velocity vector information may be obtained in step S400 according to the different display modes of the fluid velocity vector information.
For example, in some embodiments of the present invention, step S400 includes: calculating, according to the volume ultrasonic echo signals obtained in step S200, the fluid velocity vector of the target point at a first display position in the three-dimensional ultrasound image data at different moments, so as to obtain the fluid velocity vector information of the target point in the three-dimensional ultrasound image data at the different moments. Then, at the time of image display, there is fluid velocity vector information at the first display position in the three-dimensional ultrasound image data of each moment. As shown in fig. 11(a), from the volume ultrasonic echo signals obtained in step S200, three-dimensional ultrasound image data P1, P2, ..., Pn corresponding to times t1, t2, ..., tn can be obtained. In this embodiment, the first display position corresponding to the target point in the three-dimensional ultrasound image data at each time is always the position (X1, Y1, Z1). Based on this, when the fluid velocity vector information is marked in the subsequent step S500, the fluid velocity vectors correspondingly calculated at the different times are all marked at the position (X1, Y1, Z1) in the three-dimensional ultrasound image data. If the target point is selected, in part or in whole, at the user's own discretion or by system default as referred to in the above embodiments, the corresponding first display position can be obtained accordingly, and the fluid velocity vector information at the first display position in the three-dimensional ultrasound image data corresponding to the current time is calculated for marking. This display mode is referred to hereinafter as the first mode.
In other embodiments of the present invention, step S400 includes: calculating, according to the volume ultrasonic echo signals obtained in step S200, the fluid velocity vectors obtained in sequence as the target point continuously moves to corresponding positions in the three-dimensional ultrasound image data, so as to obtain the fluid velocity vector information of the target point. In this embodiment, the fluid velocity vectors of the target point at the respective positions it reaches after continuously moving from an initial position are obtained by repeatedly calculating the movement of the target point from one position to another in the three-dimensional ultrasound image data within a time interval. That is, in this embodiment the position in the three-dimensional ultrasound image data at which the fluid velocity vector is determined is itself obtained by calculation. Then, in step S500, the fluid velocity vector information at the calculated position in the three-dimensional ultrasound image data corresponding to each time may be marked.
As shown in fig. 11(b), three-dimensional ultrasound image data P11, P12, ..., P1n corresponding to times t1, t2, ..., tn can be obtained according to the volume ultrasonic echo signals obtained in step S200. Then, with reference to the target points selected, in part or in whole, by the user autonomously in the above embodiment, or to the distribution density of target points defaulted by the system, an initial position of the target point is determined, such as the first point located at (X1, Y1, Z1) in fig. 11(b), and the fluid velocity vector at this initial position in the three-dimensional ultrasound image data P11 at time t1 (indicated by an arrow in P11) is calculated. Next, the movement of the target point (the black dot in the figure) from the initial position in the three-dimensional ultrasound image data P11 at time t1 to the position (X2, Y2, Z2) in the three-dimensional ultrasound image data P12 at time t2 is calculated, and the fluid velocity vector at the position (X2, Y2, Z2) in the three-dimensional ultrasound image data P12 is then obtained from the volume ultrasonic echo signals for marking into the three-dimensional ultrasound image data.
For example, the displacement of the target point is calculated by moving it from the position (X1, Y1, Z1) in the three-dimensional ultrasound image data P11 at time t1 along its fluid velocity vector for one time interval (the interval between time t2 and time t1); this yields the second display position of the target point on the three-dimensional ultrasound image data at the second time t2, and the fluid velocity vector at this second display position is then obtained according to the volume ultrasonic echo signals obtained in step S200, giving the fluid velocity vector information of the target point in the three-dimensional ultrasound image data P12 at time t2. And so on for every two adjacent times: the displacement is obtained by moving the target point along the direction of its fluid velocity vector at the first of the two times for the duration of the interval between them; the corresponding position of the target point on the three-dimensional ultrasound image data at the second time is determined from this displacement; and the fluid velocity vector at that position is then obtained from the volume ultrasonic echo signals. In this way, the blood flow fluid velocity vector information of the target point continuously moving from (X1, Y1, Z1) to (Xn, Yn, Zn) in the three-dimensional ultrasound image data is obtained, that is, the fluid velocity vectors at the corresponding positions reached by the target point as it continuously moves from the initial position through the three-dimensional ultrasound image data at different times; these are used as the fluid velocity vector information of the target point and marked into the three-dimensional ultrasound image data for superimposed display.
In the display mode of this embodiment, the movement displacement of the target point over a time interval is calculated, and the corresponding position of the target point in the three-dimensional ultrasound image data is determined according to this displacement. The time interval by which the initially selected target point moves may be determined by the system transmission frequency, may be determined by the display frame rate, or may be a time interval input by the user; the position reached after the target point moves is calculated according to this time interval, and the fluid velocity vector information at that position is then obtained for display. Initially, the human-computer interaction device may be used to select N initial target points, or the N initial target points may be set according to default distribution positions or distribution densities, and each initial target point may be identified by a set fluid velocity vector representing the magnitude and direction of the flow velocity at that point, as shown in fig. 11(b). In step S500, the fluid velocity vectors obtained as the marked target points continuously move to corresponding positions in the three-dimensional ultrasound image data may form velocity vector identifiers that flow along with time, and the fluid velocity vector identifier may be marked in any shape.
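The stepping rule of this second mode, where each target point is displaced by its current velocity vector times the frame interval and the velocity at the new position is then looked up for marking, can be sketched as follows. The helper name `propagate_target` and the callback `velocity_at` are assumptions for illustration; in the described system the velocity would come from the volume ultrasonic echo signals rather than an analytic field.

```python
# Minimal sketch of the second display mode: at each time step the target point
# moves by (velocity * time interval), and the velocity at the reached position
# is recorded for marking into the three-dimensional image data.

def propagate_target(initial_pos, velocity_at, times):
    """Return the list of (position, velocity) pairs visited by the target."""
    pos = initial_pos
    trail = []
    for idx in range(len(times)):
        v = velocity_at(pos, times[idx])     # sample the field at the current position
        trail.append((pos, v))
        if idx + 1 < len(times):
            dt = times[idx + 1] - times[idx]
            pos = tuple(p + vi * dt for p, vi in zip(pos, v))  # displacement = v * dt
    return trail

# Toy field: uniform flow of 2 units per time step along X.
field = lambda pos, t: (2.0, 0.0, 0.0)
trail = propagate_target((0.0, 0.0, 0.0), field, times=[0.0, 1.0, 2.0])
# positions visited: (0,0,0) -> (2,0,0) -> (4,0,0)
```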
With the fluid velocity vector information calculated and marked in the manner of fig. 11(b), the change over time makes the velocity vector identifiers present a flowing visual effect when output and displayed: the arrow of each target point changes its position, so that the movement of the velocity vector identifiers, such as solid arrows, forms a visual impression of fluid flow. The user can thus observe a near-real fluid flow visualization effect, for example a display of the flow of blood within a blood vessel. This display mode is referred to hereinafter as the second mode.
Based on target points selected, in part or in whole, by the user or defaulted by the system, and according to the different transmission forms of the volume ultrasonic beams in the above step S100, in the above embodiments the fluid velocity vectors of the target points within the scanning target at the corresponding positions in the three-dimensional ultrasound image data at any time can be obtained from the volume ultrasonic echo signals in the following ways.
First, the blood flow fluid velocity vector information of the target point within the scanning target may be calculated from one set of volume ultrasonic echo signals obtained by transmitting the ultrasonic beam along one ultrasonic propagation direction in step S100. In this process, the fluid velocity vector of the target point at the corresponding position in the volume image data can be obtained by calculating the movement displacement and movement direction of the target point within a preset time interval.
As described above, in the present embodiment the volume plane ultrasonic echo signals may be used to calculate the fluid velocity vector information of the target point; in some embodiments of the present invention, the movement displacement and movement direction of the target point within the scanning target within a preset time interval are calculated based on one set of volume plane ultrasonic echo signals.
In the present embodiment, the method for calculating the fluid velocity vector of the target point at the corresponding position in the volume image data may use a method similar to speckle tracking, or may also use a doppler ultrasound imaging method to obtain the fluid velocity vector of the target point in an ultrasound propagation direction, or may also obtain the velocity component vector of the target point based on the temporal gradient and the spatial gradient at the target point, and so on.
For example, as shown in fig. 12(c), in some embodiments of the present invention, the process of obtaining a fluid velocity vector at a corresponding position in the three-dimensional ultrasound image data of a target point within the scan target according to the volumetric ultrasound echo signal may include the following steps.
First, at least two frames of three-dimensional ultrasonic image data, for example, at least a first frame of three-dimensional ultrasonic image data and a second frame of three-dimensional ultrasonic image data, may be obtained according to the previously obtained volumetric ultrasonic echo signal.
As described previously, the volume plane ultrasonic beam can be employed in the present embodiment to acquire the image data used to calculate the fluid velocity vector of the target point. The plane ultrasonic beam insonifies the whole imaging area; therefore, a 2D area-array probe is used to transmit a group of volume plane ultrasonic beams at the same angle, and one frame of three-dimensional ultrasound image data can be obtained after reception and 3D beam synthesis imaging. If the volume rate is 10000, i.e. 10000 transmissions are performed per second, then 10000 frames of three-dimensional ultrasound image data are obtained per second. Herein, three-dimensional ultrasound image data of the scanning target, obtained by correspondingly processing the volume plane beam echo signals that correspond to the volume plane ultrasonic beams, is referred to as "volume plane beam echo image data".
Then, a tracked stereo region is selected in the first frame of three-dimensional ultrasound image data; this region may contain the target point whose velocity vector is desired. For example, the tracked stereo region may be an arbitrarily shaped stereo region centered on the target point, such as a cubic region, e.g. the small cubic region in fig. 12(c).
Next, a stereo region corresponding to the tracked stereo region is searched for in the second frame of three-dimensional ultrasound image data, for example, a stereo region having the greatest similarity to the tracked stereo region is searched for as a tracking result region. Here, the measure of similarity may use a measurement method commonly used in the art. For example, the measure of similarity may be a three-dimensional computational model of the correlation using the following formula:
$$(A,B,C)=\arg\min_{a,b,c}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{L}\left|X_{1}(i,j,k)-X_{2}(i+a,\,j+b,\,k+c)\right|$$

or

$$(A,B,C)=\arg\max_{a,b,c}\frac{\sum_{i,j,k}\bigl(X_{1}(i,j,k)-\bar{X}_{1}\bigr)\bigl(X_{2}(i+a,j+b,k+c)-\bar{X}_{2}\bigr)}{\sqrt{\sum_{i,j,k}\bigl(X_{1}(i,j,k)-\bar{X}_{1}\bigr)^{2}\,\sum_{i,j,k}\bigl(X_{2}(i+a,j+b,k+c)-\bar{X}_{2}\bigr)^{2}}}$$

wherein $X_{1}$ is the first frame of three-dimensional ultrasound image data and $X_{2}$ is the second frame of three-dimensional ultrasound image data; i, j and k are the three-dimensional coordinates of the image; (A, B, C) are the values of the offset at which the expression on the right-hand side reaches its extremum (minimum for the first model, maximum for the second), and represent the new position; M, N and L are the sizes of the tracked stereo region; and $\bar{X}_{1}$ and $\bar{X}_{2}$ are the average values within the tracked stereo region and the tracking result region (i.e., the small cubic region in fig. 12(c); the arrow indicates the moving direction of the same cubic region over time) in the first and second frames of three-dimensional ultrasound image data, respectively.
And finally, obtaining the velocity vector of the target point according to the positions of the tracking stereo region and the tracking result region and the time interval between the first frame of three-dimensional ultrasonic image data and the second frame of three-dimensional ultrasonic image data. For example, the velocity magnitude of the fluid velocity vector may be obtained by dividing the distance between the tracking solid region and the tracking result region (i.e., the movement displacement of the target point within the preset time interval) by the time interval between the first frame body plane beam echo image data and the second frame body plane beam echo image data, and the velocity direction of the fluid velocity vector may be the direction of a line connecting from the tracking solid region to the tracking result region (i.e., the arrow direction in fig. 12 (c)), i.e., the movement direction of the target point within the preset time interval.
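The block-matching steps above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: it uses the sum of absolute differences as the similarity measure (the first model above), an exhaustive search over a small offset range, and illustrative names (`track_block`, `voxel_mm`).

```python
import numpy as np

# Hedged sketch of 3D speckle tracking: find the offset (A, B, C) minimising
# the sum of absolute differences between a tracked cube in frame 1 and
# candidate cubes in frame 2, then convert the offset to a velocity vector.

def track_block(frame1, frame2, corner, size, search, dt, voxel_mm=1.0):
    i0, j0, k0 = corner
    ref = frame1[i0:i0+size, j0:j0+size, k0:k0+size]   # tracked stereo region
    best, best_off = None, (0, 0, 0)
    for a in range(-search, search + 1):
        for b in range(-search, search + 1):
            for c in range(-search, search + 1):
                cand = frame2[i0+a:i0+a+size, j0+b:j0+b+size, k0+c:k0+c+size]
                if cand.shape != ref.shape:
                    continue                           # candidate outside the volume
                sad = np.abs(ref - cand).sum()         # similarity measure
                if best is None or sad < best:
                    best, best_off = sad, (a, b, c)
    velocity = tuple(o * voxel_mm / dt for o in best_off)  # displacement / interval
    return best_off, velocity

# Toy volumes: a bright blob shifted by (1, 0, 2) voxels between frames.
f1 = np.zeros((12, 12, 12)); f1[4:7, 4:7, 4:7] = 1.0
f2 = np.zeros((12, 12, 12)); f2[5:8, 4:7, 6:9] = 1.0
offset, vel = track_block(f1, f2, corner=(4, 4, 4), size=3, search=2, dt=0.5)
# offset -> (1, 0, 2); vel -> (2.0, 0.0, 4.0)
```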
In order to improve the accuracy of the speckle tracking method in calculating the fluid velocity vector, wall filtering may be performed on each frame of the obtained three-dimensional ultrasound image data, i.e., each spatial position point of the three-dimensional ultrasound image data is filtered along the time direction. Tissue signals in the three-dimensional ultrasound image data vary little over time, while fluid signals, such as blood flow signals, vary greatly due to flow. A high-pass filter may thus be employed as a wall filter for fluid signals such as blood flow signals. After wall filtering, the higher-frequency fluid signals remain, while the lower-frequency tissue signals are filtered out. The wall-filtered signal therefore has a greatly enhanced fluid signal-to-noise ratio, which is favorable for improving the calculation accuracy of the fluid velocity vector. The process of wall filtering the acquired three-dimensional ultrasound image data in this embodiment is also applicable to other embodiments.
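The wall-filtering idea can be illustrated with the simplest possible high-pass filter along the time direction: subtracting each voxel's temporal mean removes the (near-constant) tissue component while retaining the fluctuating fluid component. This is a didactic stand-in only; practical wall filters are higher-order polynomial-regression or IIR designs.

```python
import numpy as np

# Illustrative sketch of a zeroth-order wall filter: remove the per-voxel
# temporal mean (the slowly varying tissue signal) from a stack of volumes.

def wall_filter(volumes):
    """volumes: array of shape (T, X, Y, Z); returns the high-pass residue."""
    volumes = np.asarray(volumes, dtype=float)
    return volumes - volumes.mean(axis=0, keepdims=True)  # subtract DC (tissue)

# Stationary tissue (constant 5.0) plus an alternating blood-flow component:
t = np.arange(8)
flow = np.where(t % 2 == 0, 1.0, -1.0)              # zero-mean fluctuation
vols = 5.0 + flow[:, None, None, None] * np.ones((8, 2, 2, 2))
filtered = wall_filter(vols)
# the tissue DC level is removed; the fluctuating flow signal survives
```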
For another example, in other embodiments of the present invention, a method of obtaining a velocity vector of a target point based on a temporal gradient and a spatial gradient at the target point comprises:
firstly, obtaining at least two frames of three-dimensional ultrasonic image data according to a volume ultrasonic echo signal; or the following steps can be performed after the three-dimensional ultrasonic image data is subjected to wall filtering.
Then, obtaining a gradient along a time direction at the target point according to the three-dimensional ultrasonic image data, and obtaining a first velocity component along an ultrasonic wave propagation direction at the target point according to the three-dimensional ultrasonic image data;
secondly, respectively obtaining a second velocity component along a first direction and a third velocity component along a second direction at the target point according to the gradient and the first velocity component, wherein the first direction, the second direction and the ultrasonic propagation direction are mutually perpendicular;
and finally, synthesizing to obtain the fluid velocity vector of the target point according to the first velocity component, the second velocity component and the third velocity component.
In this embodiment, the first direction and the second direction are perpendicular to each other in pairs, and it can be understood that a three-dimensional coordinate system is constructed with the ultrasonic propagation direction as a coordinate axis, for example, the ultrasonic propagation direction is a Z axis, and the rest of the first direction and the second direction are an X axis and a Y axis, respectively.
First, assuming that three-dimensional ultrasound image data after wall filtering is represented as P (x (t), y (t), z (t)), P is derived in the time direction, and the following formula (1) is obtained according to the chain rule:
the second velocity component of the fluid in the X direction, denoted as the third velocity component in the Y direction, denoted as the first velocity component in the Z direction, is then, equation (1) may be changed to equation (2) below:
The spatial gradients ∂P/∂x, ∂P/∂y and ∂P/∂z can be obtained by differentiating the three-dimensional ultrasonic image data along the X, Y and Z directions, respectively; the temporal gradient ∂P/∂t can be obtained by differentiating each spatial point of the three-dimensional ultrasonic image data along the time direction across a plurality of frames of three-dimensional ultrasonic image data.
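The gradient computation described above (spatial gradients along X, Y and Z, and the temporal gradient across successive frames) can be sketched with central differences; this is a minimal sketch assuming numpy, and the function name and (T, Z, Y, X) array layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def gradients(frames, dt=1.0, dx=1.0):
    """Central-difference gradients of a stack of 3-D ultrasound frames.

    frames: array of shape (T, Z, Y, X) -- T successive volumes.
    dt, dx: temporal and spatial sample spacing (illustrative units).
    Returns (Pt, Pz, Py, Px), each of shape (T, Z, Y, X).
    """
    Pt = np.gradient(frames, dt, axis=0)   # gradient along the time direction
    Pz = np.gradient(frames, dx, axis=1)   # along the ultrasound propagation (Z) direction
    Py = np.gradient(frames, dx, axis=2)   # along the Y direction
    Px = np.gradient(frames, dx, axis=3)   # along the X direction
    return Pt, Pz, Py, Px
```

For a volume whose values vary linearly in time and space, central differences recover the exact gradients.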
Then, to solve by least squares, equation (2) can be transformed into the following linear regression equation (3):

b = A·v + ε, where v = (v_x, v_y, v_z)^T and b_i = −(∂P/∂t)_i (3)

wherein the subscript i denotes the result of the ith calculation of the gradients of the three-dimensional ultrasonic image data along the X, Y and Z directions, respectively. The parameter matrix A is formed from the gradients of each spatial point calculated multiple times along the directions of the three-dimensional coordinate axes, i.e., the ith row of A is ((∂P/∂x)_i, (∂P/∂y)_i, (∂P/∂z)_i). There are N calculations in total, and since the time taken for these N calculations is very short, the fluid velocity is assumed to remain constant during this time. ε_i denotes a random error. Here, equation (3) satisfies the Gauss-Markov theorem, and its least-squares solution is the following equation (4):

v̂ = (A^T·A)^(−1)·A^T·b (4)

According to the Gauss-Markov theorem, the variance of the random error ε_i can be expressed as the following formula (5):

σ_A^2 = (b − A·v̂)^T·(b − A·v̂) / (N − 3) (5)
Secondly, based on the above relation model of the gradients, velocity values v_z at different times in the ultrasonic wave propagation direction (i.e., the Z direction) at each spatial point are obtained by the Doppler ultrasound measurement method, together with their average value, and the variance of the random error in the ultrasonic wave propagation direction and the parameter matrix at each spatial point are calculated. In equation (6), V_D = (v_z,1, …, v_z,M)^T is the set of M velocity values at different times measured by Doppler ultrasound, and v̄_z is their average value obtained by the Doppler ultrasonic method:

V_D = B·v + ε', where each row of the parameter matrix B is (0, 0, 1) (6)

Thus the variance of the random error ε'_j of equation (6) can be expressed as the following formula (7):

σ_B^2 = (1/M)·Σ_{j=1}^{M} (v_z,j − v̄_z)^2 (7)
Using the variance of the random error in the ultrasonic wave propagation direction and the parameter matrix at each spatial point as known information, the solution of the above equations is obtained by a weighted least-squares method with the two different variances calculated according to equations (5) and (7), as shown in the following equation (8):

v̂ = (C^T·W^T·W·C)^(−1)·C^T·W^T·W·d, where C = [A; B], d = [b; V_D], W = [(1/σ_A)·I_A, O; O, (1/σ_B)·I_B] (8)

wherein O is a zero matrix, and I_A and I_B are identity matrices whose orders correspond to the numbers of rows of the matrices A and B, respectively. The weighting factor is the square root of the inverse of the variance of the random error term in each linear regression equation.
Finally, three mutually perpendicular velocity components v_x, v_y and v_z are obtained by solving, and the magnitude and direction of the vector blood flow velocity are then obtained by three-dimensional spatial synthesis.
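The weighted least-squares fusion of the gradient equations and the Doppler measurements described above can be sketched as follows; this is a minimal sketch assuming numpy, in which the stacked-matrix form, the function name and its parameters are illustrative assumptions.

```python
import numpy as np

def solve_velocity(A, b, vz_meas, sigma_a, sigma_b):
    """Weighted least-squares fusion of gradient and Doppler equations.

    A: (N, 3) matrix whose rows are the spatial gradients (Px, Py, Pz)
       at one spatial point, one row per calculation.
    b: (N,) vector of -Pt (negated temporal gradients).
    vz_meas: (M,) Doppler-measured velocities along Z at the same point.
    sigma_a, sigma_b: random-error standard deviations of the two sets.
    Returns the fitted velocity vector (vx, vy, vz).
    """
    M = len(vz_meas)
    B = np.tile([0.0, 0.0, 1.0], (M, 1))       # each Doppler row selects vz
    # weight each row by 1/sigma (square root of the inverse variance)
    C = np.vstack([A / sigma_a, B / sigma_b])
    d = np.concatenate([b / sigma_a, vz_meas / sigma_b])
    v, *_ = np.linalg.lstsq(C, d, rcond=None)
    return v
```

Dividing each equation set by its error standard deviation before the ordinary least-squares solve is equivalent to the block-diagonal weighting matrix W of equation (8).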
For example, in other embodiments of the invention, a doppler ultrasound imaging method may be used to obtain the fluid velocity vector of the target point, as follows.
In the doppler ultrasound imaging method, a plurality of volume ultrasonic beams are continuously emitted toward the scan target in the same ultrasound propagation direction; the echoes of the multiple emitted volume ultrasonic beams are received to obtain multiple volume ultrasonic echo signals, wherein each value in each volume ultrasonic echo signal corresponds to a value at a target point when scanning along the ultrasonic wave propagation direction. The step S400 includes:
firstly, Hilbert transformation is respectively performed on the multiple volume ultrasonic echo signals along the ultrasonic propagation direction, or IQ demodulation is performed on the echo signals, and multiple groups of three-dimensional ultrasonic image data representing the values at each target point are obtained after beam forming. After N transmissions and receptions, there are N complex values varying with time at each target point position, and the velocity of the target point in the ultrasonic wave propagation direction is then calculated according to the following two formulas:

R = Σ_{i=1}^{N−1} [x(i+1) + j·y(i+1)]·[x(i) − j·y(i)] (9)

Vz = c / (4π·f_0·T_prf) · arctan( Im(R) / Re(R) ) (10)

where Vz is the calculated velocity value in the ultrasonic wave propagation direction, c is the speed of sound, f_0 is the center frequency of the probe, T_prf is the time interval between two transmissions, N is the number of transmissions, x(i) is the real part on the ith transmission, y(i) is the imaginary part on the ith transmission, Im(·) is the imaginary part operator, and Re(·) is the real part operator. The above formula is the flow velocity calculation formula at a fixed position.
Secondly, by analogy, the magnitude of the fluid velocity vector at each target point can be found from the N complex values.
Finally, the direction of the fluid velocity vector is the ultrasonic wave propagation direction, i.e. the ultrasonic wave propagation direction corresponding to the multiple times of the volume ultrasonic echo signals.
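The autocorrelation (Kasai-type) estimate described above can be sketched as follows; this is a minimal sketch assuming numpy, the function name and default probe parameters are illustrative, and the sign convention of the result is an assumption rather than the patent's.

```python
import numpy as np

def kasai_vz(iq, c=1540.0, f0=5e6, t_prf=1e-4):
    """Autocorrelation velocity estimate along the beam (Z) direction.

    iq: complex array of N samples x(i) + j*y(i) at one target point,
        one sample per transmission.
    c: speed of sound (m/s), f0: probe center frequency (Hz),
    t_prf: time interval between two transmissions (s).
    Returns the axial velocity Vz in m/s (sign convention assumed).
    """
    # lag-1 autocorrelation over the N transmissions, formula (9)
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))
    # phase via the Im/Re operators of the text, formula (10)
    phase = np.arctan2(r1.imag, r1.real)
    return c * phase / (4.0 * np.pi * f0 * t_prf)
```

The estimate is unambiguous as long as the per-pulse phase shift stays within (−π, π], i.e., below the Nyquist velocity c/(4·f_0·T_prf).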
Generally, in ultrasonic imaging, the motion velocity of the scan target or a moving part therein can be obtained by performing doppler processing on the volume ultrasonic echo signal using the doppler principle. For example, after the volume ultrasound echo signal is obtained, the motion velocity of the scan target or a moving part therein may be obtained from the volume ultrasound echo signal by an autocorrelation estimation method or a cross-correlation estimation method. The doppler processing of the volume ultrasound echo signal may use any method currently used in the art, or developed in the future, for calculating the motion velocity of the scan target or a moving part therein from the volume ultrasound echo signal, and will not be described in detail herein.
Of course, for the volume ultrasonic echo signals corresponding to one ultrasonic wave propagation direction, the present invention is not limited to the above two methods; other methods known in the art or developed in the future may also be adopted.
In addition, when the blood flow velocity vector of a target point in the three-dimensional ultrasound image data is calculated, the new position at which a calculation point arrives is very likely not a position at which the target point is to be calculated, and an interpolation method may then be used, for example an 8-point interpolation method. As shown in fig. 12(d), assume that the gray point in the middle of the stereo region is the point to be calculated, and the 8 black points are the positions at which the velocity is calculated in each frame. The distance between each black point (the black points represent the vertices of the three-dimensional region) and the gray point is obtained through a spatial connecting line, and a weight list is then obtained according to these distances. The velocity at each black point is decomposed into Vx, Vy and Vz, the three directions being mutually perpendicular. The velocity values of the gray point in the three directions are respectively calculated according to the velocities of the 8 black points in the three directions and the weight values, and the magnitude and direction of the velocity of the gray point can thus be obtained. The 8-point interpolation method is based on a cubic structure of the stereo region, but interpolation calculation may of course also be performed based on stereo regions of other shapes, such as regular tetrahedrons and regular octahedrons. By defining the stereo region structure of the neighborhood space of the target point and setting a corresponding interpolation calculation method, the fluid velocity vector of the target point at the position to be calculated can be computed from the fluid velocity vectors at the new positions reached by the calculation points.
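For the cubic case, the distance-based weighting above reduces to trilinear interpolation of the velocity vectors at the 8 vertices; this is a minimal sketch assuming numpy, with an illustrative function name and array layout.

```python
import numpy as np

def interp8(corners, frac):
    """Trilinear (8-point) interpolation of velocity vectors.

    corners: (2, 2, 2, 3) array -- (Vx, Vy, Vz) at the 8 cube vertices,
             indexed by (z, y, x) vertex position.
    frac: (fz, fy, fx) fractional position of the point to be
          calculated inside the unit cube, each in [0, 1].
    Returns the interpolated (Vx, Vy, Vz).
    """
    fz, fy, fx = frac
    wz = np.array([1 - fz, fz])
    wy = np.array([1 - fy, fy])
    wx = np.array([1 - fx, fx])
    # the weight of each vertex is the product of the per-axis weights
    w = wz[:, None, None] * wy[None, :, None] * wx[None, None, :]
    return np.tensordot(w, corners, axes=3)
```

Interpolating each of the three components independently and then recombining them gives the magnitude and direction of the velocity at the point to be calculated.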
In the second method, in step S100 the volume ultrasonic beams are emitted along a plurality of ultrasound propagation directions to form a plurality of scan volumes; the echoes of the volume ultrasonic beams from the scan volumes are received to obtain multiple groups of volume ultrasonic echo signals, and the fluid velocity vector information of the target point in the scan target is calculated according to the multiple groups of volume ultrasonic echo signals. In this process, firstly, based on one group of the multiple groups of volume ultrasonic echo signals, a velocity component vector of the target point in the scan target at the corresponding position in the three-dimensional ultrasonic image data is calculated, and a plurality of velocity component vectors at the corresponding position are acquired according to the multiple groups of volume ultrasonic echo signals; then, according to the plurality of velocity component vectors, the fluid velocity vector of the target point at the corresponding position in the three-dimensional ultrasonic image data is obtained by synthesis.
As described above, in this embodiment, the volume plane ultrasonic echo signals may be used to calculate the fluid velocity vector of the target point, and in some embodiments of the present invention, based on one set of volume plane ultrasonic echo signals in multiple sets of volume plane ultrasonic echo signals, a velocity component vector at a position of the target point in the scan target is calculated, and multiple velocity component vectors at the position are obtained according to the multiple sets of volume plane ultrasonic echo signals.
In this embodiment, the process of calculating a velocity vector of a target point in a scanned target based on one of the multiple sets of bulk ultrasound echo signals may refer to one of the multiple calculation methods provided in the first method. For example, from a set of volume ultrasound echo signals, velocity vectors of a target point at corresponding positions are obtained by calculating movement displacement and movement direction of the target point within a preset time interval. The method for calculating the velocity component vector of the target point in this embodiment may use the method similar to speckle tracking described above, or may also use doppler ultrasound imaging method to obtain the velocity component vector of the target point in an ultrasound propagation direction, or may also obtain the blood flow velocity component vector of the target point based on the temporal gradient and the spatial gradient at the target point, and so on. Reference is made in detail to the preceding detailed explanation of the first mode, which is not repeated here.
When two angles exist in step S100, the magnitude and direction of the fluid velocity at all positions to be measured at one moment can be obtained through 2N transmissions; if there are three angles, 3N transmissions are required, and so on. Fig. 12(a) shows transmissions at two different angles A1 and B1; after 2N transmissions, the magnitude and direction of the velocity at the dot position in the figure can be calculated by velocity fitting. The velocity fitting is shown in fig. 12(b). VA and VB in fig. 12(b) are the velocity component vectors of the target point at the corresponding position along the two ultrasonic wave propagation directions A1 and B1 in fig. 12(a), respectively, and the fluid velocity vector V of the target point at the corresponding position is obtained by spatial velocity synthesis. When two ultrasonic wave propagation directions exist, the image data obtained by each transmission can be reused and the velocity vector calculated by the doppler imaging method, so that the time interval between two successive whole-field velocity and direction results is reduced: the minimum time interval for two ultrasonic wave propagation directions is the time used for 2 transmissions, the minimum time interval for three ultrasonic wave propagation directions is the time used for 3 transmissions, and so on. Using this method, the flow velocity and direction at every position in the whole field can be obtained at each moment.
When at least three ultrasonic wave propagation directions exist in step S100, at least three groups of volume ultrasonic echo signals are used to calculate at least three velocity component vectors. When the corresponding at least three ultrasonic wave propagation directions are not in the same plane, the calculated fluid velocity vector can be made closer to the velocity vector in the real three-dimensional space; this is hereinafter referred to as the constraint condition on the ultrasonic wave propagation directions.
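The synthesis of a fluid velocity vector from its components along several beam directions can be sketched as a small least-squares fit; this is a minimal sketch assuming numpy, with an illustrative function name, and it shows why three non-coplanar directions are needed for a unique 3-D solution.

```python
import numpy as np

def synthesize(directions, components):
    """Fit a velocity vector from its components along beam directions.

    directions: (K, 3) unit vectors of the ultrasound propagation
                directions used for the K measurements.
    components: (K,) measured velocity components along those directions.
    Returns the least-squares velocity vector V; a unique 3-D solution
    needs K >= 3 directions that are not all in one plane.
    """
    D = np.asarray(directions, float)
    v, *_ = np.linalg.lstsq(D, np.asarray(components, float), rcond=None)
    return v
```

If all directions lie in one plane, the matrix D is rank-deficient and the out-of-plane velocity component is unconstrained, which is exactly the constraint condition stated above.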
For example, in the above step S100, the ultrasonic beams may be emitted toward the scan target along N (N ≥ 3) ultrasound propagation directions, but in step S400, when calculating the fluid velocity vectors of the above target points at the corresponding positions, the calculation is performed using n velocity component vectors each time, where 3 ≤ n < N. That is, step S100 may be: an ultrasound beam is emitted toward the scan target along at least three ultrasound propagation directions, wherein any adjacent at least three ultrasound propagation directions are not in the same plane. Then, in step S400, according to the process of calculating a velocity component vector of the target point in the scan target based on one of the at least three groups of volume ultrasound echo signals, the at least three blood flow velocity component vectors corresponding to the target point at the corresponding position in at least three continuously received groups of volume ultrasound echo signals are calculated respectively, and the fluid velocity vector of the target point at the corresponding position is obtained by synthesis according to the velocity component vectors in the at least three ultrasound propagation directions.
For another example, in order to reduce the computation amount and the complexity of scanning and computation, in step S100 the ultrasonic beams may be emitted toward the scan target along N (N ≥ 3) ultrasound propagation directions, and in step S400, when the fluid velocity vectors of the target points at the corresponding positions are calculated, the calculation is performed using the N velocity component vectors each time. That is, step S100 may be: an ultrasound beam is emitted toward the scan target along at least three ultrasound propagation directions, wherein the at least three ultrasound propagation directions are not in the same plane. Then, in step S400, according to the process of calculating a velocity component vector of the target point at the corresponding position in the scan target based on one of the at least three groups of received volume ultrasonic echo signals, the velocity component vectors in all ultrasonic wave propagation directions corresponding to the target point at the corresponding position in the at least three groups of volume ultrasonic echo signals are respectively calculated, and the fluid velocity vector of the target point at the corresponding position is obtained by synthesis according to the velocity component vectors in all ultrasonic wave propagation directions.
In order to satisfy the constraint condition on the ultrasonic wave propagation directions, whether in the implementation in which "at least three adjacent ultrasonic wave propagation directions are not in the same plane" or that in which "the at least three ultrasonic wave propagation directions are not in the same plane", the ultrasonic wave propagation directions can be obtained by adjusting the time delays of the transmitting array elements participating in the emission of the volume ultrasonic beam, and/or by driving those transmitting array elements to deflect so as to change the emission direction of the ultrasonic wave. For example, each linear array probe, or each transmitting array element in a probe combination structure arranged in array form, is provided with corresponding drive control to uniformly adjust the deflection angle or delay of each probe or transmitting array element, so that the scan volumes formed by the volume ultrasonic beams output by the probe combination structure have different offsets, thereby obtaining different ultrasound propagation directions.
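The delay-based steering mentioned above can be illustrated for a linear array; this is a minimal sketch assuming numpy, and the function name, parameters and default sound speed are illustrative assumptions, not values from the patent.

```python
import numpy as np

def steering_delays(n_elems, pitch, theta_deg, c=1540.0):
    """Per-element transmit delays that steer a linear array's beam.

    n_elems: number of transmitting array elements.
    pitch: element spacing in meters.
    theta_deg: steering angle from the array normal, in degrees.
    c: speed of sound in m/s.
    Returns delays in seconds, shifted so the smallest delay is zero.
    """
    x = np.arange(n_elems) * pitch              # element positions along the array
    tau = x * np.sin(np.radians(theta_deg)) / c  # plane-wave steering law
    return tau - tau.min()                       # fire the earliest element at t = 0
```

A linear delay ramp across the elements tilts the emitted wavefront by the steering angle; different ramps give the different ultrasound propagation directions required by the constraint condition.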
In some embodiments of the present invention, instruction information may be generated by configuring a user-selectable option on the display interface, or by providing an option configuration key, etc., for obtaining the number of ultrasound propagation directions selected by the user, or the number of velocity component vectors used for synthesizing the fluid velocity vector in step S400. According to the instruction information, the number of ultrasonic wave propagation directions in step S100 is adjusted, and the number of velocity component vectors used for synthesizing the fluid velocity vector in step S400 is determined according to the number of ultrasonic wave propagation directions; alternatively, the number of velocity component vectors used for synthesizing the fluid velocity vector at the corresponding position of the target point in step S400 is adjusted. This provides the user with a more comfortable experience and a more flexible information extraction interface.
In step S500, the 3D image processing module 11 forms a fluid velocity vector identifier from the fluid velocity vector information for marking the target point in the three-dimensional ultrasound image data, and obtains the volume image data 900 containing the fluid velocity vector identifier. The three-dimensional ultrasonic image data can be acquired in real time or offline; if acquired offline, the three-dimensional ultrasonic image data can be replayed, paused, and so on. In addition, when the enhanced three-dimensional ultrasound image data of at least part of the scan target is obtained by the gray-scale blood flow imaging technique in step S310, the corresponding gray-scale features or fluid velocity information obtained by the gray-scale blood flow imaging technique may also be shown in the image displayed on the display device. When step S500 is executed, the 3D image processing module 11 may segment the region of interest used for representing the fluid region in the enhanced three-dimensional ultrasound image data to obtain a cloud-shaped cluster region block, mark the cloud-shaped cluster region block in the three-dimensional ultrasound image data, and obtain volume image data containing the cloud-shaped cluster. Alternatively, in the method for dynamically displaying a cluster body shown in fig. 8(b), step S510 is adopted after step S310: the region of interest in the enhanced three-dimensional ultrasound image data characterizing the fluid region is segmented to obtain a cloud-shaped cluster region block, and the cloud-shaped cluster region block is marked in the three-dimensional ultrasound image data to form a cluster body, obtaining the volume image data containing the cluster body. The specific implementation of step S510 may refer to the relevant description of step S500.
In some embodiments of the present invention, before marking the fluid velocity vector information and/or the cloud-like cluster region block of the target point, the three-dimensional ultrasound image data may be converted into perspective-effect volume image data, so as to facilitate the conversion of the subsequent parallax image.
The three-dimensional ultrasonic image data can be converted into the volume image data with perspective effect in the following two ways:
firstly, different transparencies are hierarchically set for three-dimensional ultrasonic image data. By setting different transparencies, the information display of the interior of the scanning object (such as the blood vessel 930 in fig. 13 and 14) can be obtained when the scanning object is viewed from a certain angle (the observation angle in the subsequent parallax image conversion), mainly for displaying the fluid velocity vector information of the target point in the blood vessel 930, for example, the fluid velocity vector information of the target point marked in fig. 13 and 14 forms the fluid velocity vector identifier 920.
As shown in fig. 13(a), parallel sections (710, 711, 712) are made through the three-dimensional ultrasound image data, and each section is set to a different transparency, or a plurality of sections are sequentially set to stepwise increasing transparencies; fig. 13(a) characterizes the different transparencies by different cross-hatchings. Alternatively, stepwise graded transparencies may be set for the plurality of sections, with the section at the target position (i.e., the core observation position) set to a low transparency and the transparencies of the sections on its two sides set to increase stepwise, or set to relatively high values according to the positional relationship of the sections, so that the background image is weakened by the transparency setting and the information of the section at the target position (i.e., the core observation position) is displayed more prominently. For example, in fig. 13(a), the transparency of the parallel section 711 is 20%, and the transparency of the parallel sections 710 and 712 may then be 50%.
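The stepwise transparency assignment described above can be sketched as a simple rule; this is a minimal sketch, the function name and all numeric values (base transparency, step, cap) are illustrative assumptions in the spirit of the 20%/50% example, not values prescribed by the patent.

```python
def slice_transparencies(n_slices, target_idx, base=0.2, step=0.15, cap=0.9):
    """Stepwise transparency per parallel section.

    The section at target_idx (the core observation position) gets the
    lowest transparency `base`; transparency grows by `step` with each
    section of distance from it, clipped at `cap`, so background
    sections fade while the target section stays prominent.
    Returns a list of transparency values, one per section.
    """
    return [min(base + step * abs(i - target_idx), cap)
            for i in range(n_slices)]
```

For five sections with the target in the middle, this yields a low value at the center and symmetrically higher values toward both sides, matching the stepwise-graded scheme of fig. 13(a).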
Specifically, different transparencies of the three-dimensional ultrasound image data can be hierarchically set according to the observation perspectives of the two paths of parallax image data. The observation angle may be a viewpoint position corresponding to an arbitrary parallax number in the step of converting the volume image data into two paths of parallax image data, or may be two observation angles for shooting the volume image data being played.
As also shown in fig. 13(b), concentric spherical sections (721, 722) are made through the three-dimensional ultrasound image data, centered on the observation point, and each section is set to a different transparency, or a plurality of sections are sequentially set to stepwise increasing transparencies. The observation point in this embodiment can be selected by the user; for example, the spatial center point of the three-dimensional ultrasound image data may be used as the observation point.
Of course, the step sizes of the plurality of parallel sections or concentric spherical sections in fig. 13(a) and 13(b) may be set as required, so that the internal information of the scan target is displayed layer by layer. Generally, when converting to the perspective effect, the observation angle of the parallax image conversion is also taken into account in setting the transparency; therefore, when different transparencies are hierarchically set for the three-dimensional ultrasonic image data, they may be set hierarchically as seen from the observation angle, so as to display the internal information of the scan target.
As shown in fig. 14, the three-dimensional ultrasound image data is subjected to tissue structure segmentation, and the tissue structure region obtained by the segmentation is set to different transparencies. 930 is a segment of a vessel image comprising a first layer of vessel wall tissue structure 931 and a second layer of vessel wall tissue structure 932, wherein the two layers of vessel wall tissue are distinguished by different transparencies, the different tissue structure regions being illustrated by different cross-hatching in fig. 14.
The two solutions presented in fig. 13 and 14 can also be used in combination with each other. The volume image data 900 with perspective effect is shown in fig. 13 and 14.
Secondly, each frame of three-dimensional ultrasonic image data is converted into a three-dimensional perspective effect image based on three-dimensional drawing software. The three-dimensional drawing software may be 3ds Max, other software tools capable of presenting stereoscopic effect images, or similar self-developed 3D drawing software tools. The perspective rendering method for the three-dimensional perspective effect image in this embodiment may refer to the foregoing; for example, the perspective effect may be set for different tissue structures according to the result of tissue structure segmentation.
After the transformation of the volume image data with perspective effect is respectively carried out on each frame of three-dimensional ultrasonic image data, the fluid velocity vector information of the target point is marked on each frame of image in sequence according to the first mode or the second mode. For example, based on the second mode, converting the three-dimensional ultrasonic image data into volume image data with perspective effect, and marking fluid velocity vector information of the target point changing along with time in the volume image data to form the fluid velocity vector identifier which can change along with time; and/or marking a time-varying cloud-like cluster region block in the volumetric image data.
Based on the above-described embodiments, the volume image data may specifically be generated as follows.
For example, different transparencies are hierarchically set for each frame of three-dimensional ultrasonic image data, the fluid velocity vector information of the target point at the corresponding position is marked in each frame, and a single-frame volume image containing the fluid velocity vector identifier is obtained; a plurality of temporally consecutive frame volume images constitute the volume image data, so that the fluid velocity vector identifier presents a flowing visual effect changing with time when the volume image data is displayed, i.e., a flowing, time-varying fluid velocity vector identifier can be observed by the human eye in the 3D ultrasound image. And/or, different transparencies are hierarchically set for each frame of three-dimensional ultrasonic image data, a cloud-shaped cluster region block is marked in each frame to form a cluster body, and a single-frame volume image containing the cloud-shaped cluster is obtained; a plurality of temporally consecutive frame volume images constitute the volume image data, so that the cluster body presents a rolling visual effect changing with time when the volume image data is displayed, i.e., a rolling, time-varying cluster body can be observed by the human eye in the 3D ultrasound image.
As another example, each frame of three-dimensional ultrasonic image data is converted into a three-dimensional perspective effect image based on three-dimensional drawing software, the fluid velocity vector information at the corresponding position of the target point is marked in each three-dimensional effect image, and a single frame image containing the fluid velocity vector identifier is obtained; a plurality of temporally consecutive frame images constitute the volume image data, so that the fluid velocity vector identifier presents a flowing visual effect changing with time when the volume image data is displayed. And/or, each frame of three-dimensional ultrasonic image data is converted into a three-dimensional perspective effect image based on three-dimensional drawing software, a cloud-shaped cluster region block is marked in each three-dimensional effect image to obtain a single frame image containing the cloud-shaped cluster, and a plurality of temporally consecutive frame images constitute the volume image data, so that the cluster body presents a rolling visual effect changing with time when the volume image data is displayed.
Or, the three-dimensional ultrasonic image data is displayed as a dynamic space stereo image based on a true three-dimensional stereo image display technology, and fluid velocity vector information of a target point changing along with time is marked in the space stereo image to obtain volume image data, so that the fluid velocity vector mark presents a flowing visual effect changing along with time when the volume image data is displayed. And/or displaying the three-dimensional ultrasonic image data into a dynamic space stereo image based on a true three-dimensional image display technology, marking a cloud-shaped cluster area block which changes along with time in the space stereo image, and obtaining the volume image data, so that a cluster body presents a rolling-shaped visual effect which changes along with time when the volume image data is displayed.
The true three-dimensional image display technology is a technology for displaying three-dimensional ultrasonic image data in a certain entity space range to form a true space stereo image of a scanned target based on a holographic display technology or a volumetric three-dimensional display technology.
The holographic display technology herein mainly includes conventional holograms (transmission-type, reflection-type, image-plane-type, rainbow-type, synthetic-type holographic display images, etc.) and computer-generated holograms (CGH). Computer-generated holograms can float in the air and have a wide color gamut. The object used to produce the hologram is generated in a computer as a mathematical model, and the physical interference of the light waves is replaced by computation; at each step, the intensity pattern in the CGH model can be determined, and the pattern can be output to a reconfigurable device that re-modulates and reconstructs the light wave information. In popular terms, CGH obtains the interference pattern of a computer graphic (virtual object) through computer operation, replacing the interference process of traditional hologram object light wave recording; the diffraction process of hologram reconstruction is unchanged in principle, but equipment capable of reconfiguring light wave information is added, so that holographic display of different static and dynamic computer graphics is realized.
Based on holographic display technology, in some embodiments of the present invention, as shown in fig. 15, the spatial stereoscopic display device 8 includes a 360-degree holographic phantom imaging system. The system includes a light source 820, a controller 830 and a beam splitter 810; the light source 820 may be a spotlight. The controller 830 includes one or more processors; it receives the three-dimensional ultrasonic image data output from the data processing module 9 (or the image processing module 7 therein) through a communication interface, obtains the interference pattern of the computer graphic (virtual object) after processing, and outputs the interference image to the beam splitter 810. The interference pattern is shown by the light that the light source 820 projects onto the beam splitter 810, forming a spatial stereo image of the scan target. Here, the beam splitter 810 may be a special lens, a four-sided pyramid, or the like.
In addition to the 360 holographic phantom imaging system described above, the spatial stereoscopic display device 8 may also be based on a holographic projection apparatus, for example, by forming a stereoscopic image on air, special glasses, a fog screen, or the like. Therefore, the spatial stereoscopic display device 8 may also be one of an air holographic projection apparatus, a laser beam holographic projection apparatus, a holographic projection apparatus having a 360-degree holographic display screen (whose principle is to project an image on a mirror rotating at a high speed so as to realize a hologram), a fog screen stereoscopic imaging system, and the like.
The air holographic projection apparatus forms a spatial stereo image by projecting the interference pattern of the computer graphic (virtual object) obtained in the above embodiment onto an airflow wall; because of the unbalanced vibration of the water molecules forming the water vapor, a holographic image with a strong stereoscopic impression can be formed. Thus, this embodiment adds a device for forming the airflow wall to the embodiment shown in fig. 15.
The laser beam holographic projection apparatus is a holographic projection system that projects a solid object using laser beams; a spatial stereo image is obtained by projecting the interference pattern of the computer graphic (virtual object) obtained in the above embodiment with laser beams. In this embodiment, the oxygen and nitrogen dispersed in the air are excited so that the mixed gas becomes a glowing substance, and a holographic image is formed in the air through continuous small explosions.
The fog screen stereoscopic imaging system further includes, on the basis of the embodiment shown in fig. 15, an atomizing device for forming a water fog wall; the interference pattern of the computer graphic (virtual object) obtained in the above embodiment is projected by laser onto the water fog wall, which serves as the projection screen, to form a holographic image and thereby obtain a spatial stereo image. That is, an image is formed in the air by laser light and the particles in the air: the atomizing device generates an artificial fog wall that replaces the traditional projection screen, aerodynamics is applied to manufacture a screen capable of generating planar fog, and a projector projects onto the fog wall to form the holographic image.
The above describes only several devices of holographic display technology; for specifics, reference may be made to related device structures existing on the market. Of course, the present invention is not limited to the above devices or systems based on holographic display technology, and may also adopt holographic display devices or technologies that may exist in the future.
The volumetric three-dimensional display technology, by contrast, does not rely on a special visual mechanism of the human eye: it manufactures a display object composed of voxel particles rather than molecular particles, so that in addition to receiving the shape carried by light waves, an observer can perceive the true existence of the voxels. Such an image is formed by exciting, by suitable means, a substance located in a transparent display volume and using the resulting absorption or scattering of visible radiation; when many positions of the substance in the volume are excited, a three-dimensional image consisting of many discrete voxels can be formed in three-dimensional space. Two such technologies are currently included, as follows.
(1) The rotator scanning technology is mainly used for displaying dynamic objects. In this technique, a series of two-dimensional images are projected onto a rotating or moving screen while the screen moves at a rate imperceptible to an observer, so that a three-dimensional object forms in the human eye due to the persistence of vision. A display system using this stereoscopic display technique can therefore realize true three-dimensional display of an image (visible over 360°). Light beams of different colors are projected onto a display medium through an optical deflector, making the medium rich in colors. At the same time, the display medium causes the light beam to produce discrete visible light points, the voxels, corresponding to any point in the three-dimensional image. A set of voxels is used to create an image that an observer can view from an arbitrary viewpoint. The imaging space in a display device based on the rotator scanning technique may be generated by rotation or translation of the screen, and voxels are activated on the emission surface as the screen sweeps through the imaging space. The system comprises a laser subsystem, a computer control subsystem, a rotary display subsystem, and other subsystems.
Based on the volumetric three-dimensional display technology, in some embodiments of the present invention, as shown in fig. 16, the spatial stereoscopic display device 8 includes a voxel solid portion 811, a rotating motor 812, a processor 813, an optical scanner 819, and a laser 814. The voxel solid portion 811 may be a rotating structure accommodating a rotating surface, the rotating surface may be a spiral surface, and the voxel solid portion 811 contains a medium that can be displayed by laser projection. The processor 813 controls the rotating motor 812 to drive the rotating surface in the voxel solid portion 811 to rotate at high speed; the processor 813 then controls the laser to generate three R/G/B laser beams, which are converged into a single full-color beam and projected through the optical scanner 819 onto the rotating surface in the voxel solid portion 811 to generate a plurality of color bright spots. When the rotation speed is fast enough, a plurality of voxels are generated in the voxel solid portion 811, and the converged voxels can form a suspended spatial stereo image.
In other embodiments of the invention, within the structural framework shown in fig. 16, the rotating surface may be an upright projection screen located inside the voxel solid portion 811; the screen has a rotation frequency of up to 730 rpm and is made of a very thin translucent plastic. When a 3D object image is to be displayed, the processor 813 divides the three-dimensional ultrasound image data into a plurality of sectional views: as the image is rotated about the Z axis, a longitudinal section perpendicular to the X-Y plane is taken on average every X degrees (e.g., 2 degrees). When the upright projection screen rotates at high speed and the sectional views are projected onto it in turn at high speed, a natural 3D image visible in all directions can be formed.
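The sectioning step can be sketched as follows; this is a minimal nearest-neighbour illustration under assumptions not stated in the patent (a cuboid volume, a 2-degree step, and a 180-degree sweep, since opposite angles share a plane).

```python
import numpy as np

def angular_sections(volume, step_deg=2.0):
    """Sample a longitudinal section (perpendicular to the X-Y plane)
    every step_deg degrees about the Z axis, using nearest-neighbour
    lookup; each section is one view to project onto the rotating screen."""
    nz, ny, nx = volume.shape
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    half = int(min(cx, cy))
    radii = np.arange(-half, half + 1)          # samples along the section
    sections = []
    for theta in np.deg2rad(np.arange(0.0, 180.0, step_deg)):
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, nx - 1)
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, ny - 1)
        sections.append(volume[:, ys, xs])      # one (nz, len(radii)) slice
    return sections

vol = np.random.rand(8, 64, 64)                 # stand-in ultrasound volume
views = angular_sections(vol, step_deg=2.0)     # 90 sections over 180 degrees
```

Projecting the 90 sections in turn, synchronised with the screen rotation, yields the omnidirectional 3D image described above.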
As shown in fig. 17, the spatial stereoscopic display device 8 includes a voxel solid portion 811 with an upright projection screen 816, a rotating motor 812, a processor 813, a laser 814, and a light-emitting array 817 on which a plurality of light beam outlets 815 are arranged. The light-emitting array 817 may adopt three DLP (Digital Light Processing) optical chips based on micro-electromechanical system (MEMS) technology, each chip carrying a high-speed light-emitting array consisting of more than one million digital micromirror devices; the three DLP chips are respectively responsible for the R/G/B three-color images, which are combined into one image. The processor 813 controls the rotating motor 812 to drive the upright projection screen 816 to rotate at high speed; the processor 813 then controls the laser to generate three R/G/B laser beams, which are input to the light-emitting array 817. The synthesized light beam is projected by the light-emitting array 817 onto the upright projection screen 816 rotating at high speed (the light beam may also reach the upright projection screen 816 by reflection from a relay optical lens), so as to generate a plurality of display voxels, which converge to form a spatial stereo image suspended in the voxel solid portion 811.
(2) The static volume imaging technology forms a three-dimensional image based on frequency up-conversion: fluorescence is spontaneously radiated after an imaging-space medium absorbs several photons, thereby generating visible pixel points. The basic principle is that two mutually perpendicular infrared laser beams act crosswise on an up-conversion material; through two resonance absorptions in the up-conversion material, a luminescence-center electron is excited to a high excitation energy level and then transitions to a lower energy level, possibly emitting visible light, so that the crossing point of the beams in the up-conversion material becomes a luminous point. If the crossing point of the two laser beams performs addressing scanning of the three-dimensional space inside the up-conversion material along a certain track, the scanned path becomes a bright band emitting visible fluorescence, and a three-dimensional stereo pattern identical to the movement track of the laser crossing point can be formed. This display method enables the naked eye to see a 360-degree, omnidirectionally visible three-dimensional image.
The static volume imaging technique sets, in the voxel solid portion 811 of each of the above embodiments, a display medium consisting of a stack of liquid crystal panels arranged at intervals (for example, each panel has a resolution of 1024 × 748, with a panel-to-panel interval of about 5 mm). The liquid crystal pixels of these specially made panels have special electro-optically controlled properties: when energized, they align parallel to the beam propagation like a louver, allowing the illuminating beam to pass transparently; when not energized, they become opaque and diffusely reflect the illuminating beam, forming a voxel within the liquid crystal panel stack. In this case, the rotating motor of fig. 16 and fig. 17 can be eliminated. Specifically, the depth that the spaced liquid crystal screens can express can be enlarged through a three-dimensional depth anti-aliasing (3D Depth Anti-Aliasing) display technology, so that the physical display resolution of 1024 × 748 × 20 can reach 1024 × 748 × 608. Like the embodiment shown in fig. 17, this embodiment may also employ DLP imaging technology.
Similarly, the above describes only several devices based on the volumetric three-dimensional display technology; for specifics, reference may be made to related device structures existing on the market. Of course, the present invention is not limited to the above devices or systems based on the volumetric three-dimensional display technology, and may also adopt volumetric three-dimensional display technologies that may exist in the future.
In the embodiment of the present invention, a spatial stereo image of the scan target may be displayed in a certain space or an arbitrary space, or presented on a display medium such as air, a mirror, a fog screen, or rotating or stationary voxels, and the fluid velocity vector information of the target point varying with time is then marked in the spatial stereo image to obtain volume image data. As to the marking of the actually displayed spatial stereo image, the position of the target point in the spatial stereo image can be obtained by converting the position of the target point in the volume image data, based on the image mapping relation between the volume image data and the imaging range of the spatial stereo image, so that the time-varying fluid velocity vector information of the target point can be marked in the spatial stereo image.
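The position conversion between volume image data and the imaging range of the spatial stereo image can be illustrated with a simple linear mapping; this is a sketch under the assumption of an axis-aligned imaging range, with the function name and parameters chosen here for illustration.

```python
def volume_to_display(point, volume_shape, display_origin, display_extent):
    """Map a voxel index (z, y, x) in the volume image data to physical
    coordinates inside the spatial stereo image's imaging range, assuming
    the imaging range is an axis-aligned box starting at display_origin
    with side lengths display_extent."""
    return tuple(
        o + (p / (s - 1)) * e
        for p, s, o, e in zip(point, volume_shape, display_origin, display_extent)
    )
```

A target point marked at the converted coordinates then appears at the matching location in the displayed stereo image.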
Based on the various generation methods of the volume image data described above, the blood flow velocity vector information of the target point may be marked in the volume image data in the following manner.
In some embodiments of the present invention, the fluid velocity vector information of the target point obtained by the first mode is marked on the volume image data 900. As shown in fig. 18, 910 represents a schematic diagram of part of a blood vessel, in which the fluid velocity vector information of the target point is marked by a cube with an arrow: the direction of the arrow represents the direction of the fluid velocity vector of the target point at that time, and the length of the arrow can be used to represent its magnitude. In fig. 18, the arrow 922 shown by a solid line indicates the fluid velocity vector information of the target point at the present time, and the arrow 921 shown by a dotted line indicates that at the previous time. To exhibit the stereoscopic display effect of the tissue structure in the volume image data, an object located close to the observation point appears large, and an object located far from the observation point appears small.
Further, in another embodiment of the present invention, the fluid velocity vector information of the target point obtained by the second mode is marked on the volume image data; that is, the fluid velocity vector information of the target point is obtained by continuously moving the target point to corresponding positions in the three-dimensional ultrasound image data and correspondingly obtaining a fluid velocity vector at each position in sequence. Then, in step S500, the fluid velocity vector identifier flowing with time is formed from the fluid velocity vectors obtained as the marked target point moves continuously through the corresponding positions. As shown in fig. 19, to exhibit the stereoscopic display effect, an object located close to the observation point appears large, and an object located far from the observation point appears small. The fluid velocity vector information of the target point is marked in fig. 19 with a sphere 940 with an arrow, where the direction of the arrow indicates the direction of the fluid velocity vector of the target point at that time, and the length of the arrow can be used to indicate its magnitude. 930 is a blood vessel image; in fig. 19, the arrowed sphere 941 shown by a solid line represents the fluid velocity vector information of the target point at the present time, and the arrowed sphere 942 shown by a dotted line represents that at the previous time. If the fluid velocity vector information of the target point is obtained by the second mode, a marker 940 flowing with time is superimposed on the volume image data.
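The arrow geometry described above (direction along the velocity vector, length proportional to its magnitude) can be sketched as follows; the function name, the dictionary layout, and the display scale factor are assumptions for illustration.

```python
import numpy as np

def arrow_marker(position, velocity, scale=0.1):
    """Compute the start point, end point, and speed of one arrowed marker:
    the arrow points along the fluid velocity vector, and its length is
    proportional to the vector's magnitude via the assumed scale factor."""
    p = np.asarray(position, dtype=float)
    v = np.asarray(velocity, dtype=float)
    speed = float(np.linalg.norm(v))
    return {"start": tuple(p), "end": tuple(p + scale * v), "speed": speed}

# Current time instant (solid arrow) and previous instant (dotted arrow),
# as with markers 922/921 in fig. 18 and 941/942 in fig. 19.
now = arrow_marker((1.0, 2.0, 3.0), (0.0, 30.0, 40.0), scale=0.5)
prev = arrow_marker((1.0, 1.7, 2.6), (0.0, 28.0, 39.0), scale=0.5)
```

Rendering the returned start/end segments per frame produces arrows whose orientation and length track the target point's velocity over time.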
As shown in fig. 19, 930 is a segment of a vessel image comprising a first layer of vessel wall tissue structure 931 and a second layer of vessel wall tissue structure 932, where the two layers of vessel wall tissue are distinguished by different colors. In addition, as shown in fig. 20, the blood flow velocity vectors of the target points in the two groups of blood vessels 960 and 970 are marked by arrowed spheres 973 and 962, respectively, and the stereo image regions 971, 972, 961 of other tissue structures are marked with other colors for distinction. The difference in color labeling within a region is represented in fig. 20 by the type of hatching filling that region. Therefore, in order to embody the stereoscopic imaging effect and distinguish the display information, the volume image data includes stereo image regions presenting each tissue structure according to the anatomical tissue structure and its hierarchical relationship, and the color parameters of each stereo image region are configured so that it is displayed distinctly from adjacent stereo image regions.
To highlight the fluid velocity vector information in the volume image data, the contour lines of the stereo image regions of the respective tissue structures can be displayed instead, so as to avoid overlaying or obscuring the fluid velocity vector identifiers. For example, as shown in fig. 18, for a segment of blood vessel 910, its outer contour and/or some cross-sectional contour may be displayed to indicate the image region where the fluid velocity vector identifier 920 is located, so that the identifier 920 is displayed more prominently and shown more intuitively and clearly.
As shown in fig. 21, when the enhanced three-dimensional ultrasound image data of at least a portion of the scanning target is obtained by the gray-scale blood flow imaging technique in step S300 of the above embodiment, the corresponding gray-scale feature or speed information obtained by that technique may also be output and displayed in the 3D ultrasound image. For example, whether the enhanced three-dimensional ultrasound image data is processed as a whole three-dimensional data volume or as a plurality of two-dimensional images, a corresponding cluster region block may be obtained in each frame of the enhanced three-dimensional ultrasound image data in the following manner. When step S500 is executed, first, a region of interest representing a fluid region in one or more frames of enhanced three-dimensional ultrasound image data is segmented to obtain a cloud-shaped cluster region block; the cloud-shaped cluster region block is then marked in the three-dimensional ultrasound image data to form a cluster, and volume image data containing the cluster is obtained, so that a time-varying, rolling cluster is presented in the 3D ultrasound image. In fig. 21(a), clusters at different times are sequentially represented by different line types 950, 951, 952; it can be seen that the clusters roll over time, vividly representing the overall rolling behavior of the fluid and giving the observer an omnidirectional observation angle. Furthermore, in the present embodiment, the region of interest may be segmented based on image grayscale attributes. Fig. 21(b) is an effect diagram in which the fluid velocity vector information of a target point, marked with an arrowed sphere 940, is superimposed on fig. 21(a).
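The grayscale-based segmentation of the fluid region into cluster region blocks can be sketched as below. This is a minimal 2D illustration under assumptions: a simple threshold stands in for the region-of-interest segmentation, 4-connectivity defines a block, and the mean gray value is used as the block's gray-scale feature.

```python
import numpy as np
from collections import deque

def cluster_blocks(frame, threshold=0.5):
    """Segment the region of interest representing fluid by a grayscale
    threshold, group it into 4-connected 'cluster region blocks', and
    compute a grayscale feature (here the mean) for each block."""
    mask = frame >= threshold
    labels = np.zeros(frame.shape, dtype=int)
    blocks, current = [], 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue, voxels = deque([start]), []
        while queue:                        # breadth-first flood fill
            y, x = queue.popleft()
            voxels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        blocks.append({"voxels": voxels,
                       "mean_gray": float(np.mean([frame[v] for v in voxels]))})
    return blocks

frame = np.zeros((10, 10))
frame[1:3, 1:3] = 0.9          # one bright fluid region
frame[6:9, 6:9] = 0.8          # a second, separate region
blocks = cluster_blocks(frame, threshold=0.5)
```

Repeating this per frame and marking each block in the three-dimensional ultrasound image data yields the time-varying clusters of fig. 21(a).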
In addition, color information may be superimposed on the cloud-shaped cluster region block in order to display the cluster more clearly. For example, when the blood vessel wall is red, color information such as white or orange is superimposed on the cluster region block indicating the blood flow, so as to distinguish the two. Alternatively, in the step of segmenting the region of interest representing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-shaped cluster region blocks, cluster region blocks with different gray-scale features may be obtained by segmenting based on image grayscale. For a cluster region block in a three-dimensional space region, the gray-scale feature may be the average, maximum, or minimum of the gray values of the spatial points in the whole region block, or the like, that is, a numerical value or a set of attribute values representing the gray-scale feature of the whole region block. Then, in the step of displaying the cloud-shaped cluster region blocks in the displayed volume image data, cluster region blocks with different gray-scale features are rendered in different colors. For example, if the cluster region blocks obtained by segmentation are classified into classes 0 to 20 according to their gray-scale feature attributes, each class is marked with its own color for display, or the classes 0 to 20 are respectively marked with colors of different purities within the same hue.
As in fig. 21(c), cluster region blocks 953 and 954 may be marked with different colors to represent their gray-scale characteristics. Of course, as shown in fig. 21(c), for the same cloud-shaped cluster region blocks 953 and 954, region bodies with different gray scales may also be obtained by the above image-grayscale-based segmentation, and different colors may be superimposed for rendering according to the gray-scale change of the different region bodies within the cluster region block; in fig. 21(c), the different region bodies within the cluster region blocks 953 and 954 are filled with different hatching to represent that different colors are superimposed for rendering. For example, the different region bodies in a cluster region block are classified according to their gray-scale feature attributes into a plurality of categories, and each category is then marked with its own hue for display, or the categories are respectively marked with colors of different purities within the same hue.
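The classification of a gray-scale feature into the color classes described above can be sketched as follows; the function name, the normalisation of the feature to [0, 1], and the purity-graded HSV scheme are assumptions chosen for illustration.

```python
def grade_color(gray_feature, hue=0.0, n_classes=21):
    """Quantise a block's grayscale feature (assumed normalised to [0, 1])
    into one of the classes 0..20 and return an HSV colour with the same
    hue whose purity (saturation) is graded by class: one possible way of
    marking classes with colours of different purities under one hue."""
    idx = min(int(gray_feature * n_classes), n_classes - 1)
    saturation = (idx + 1) / n_classes      # higher class, purer colour
    return idx, (hue, saturation, 1.0)
```

Assigning a distinct hue per class instead of a graded saturation would implement the alternative scheme in which each class has its own color.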
Of course, it is also possible to superimpose correspondingly set color information on the cluster region block according to the velocity information of the fluid region characterized by the cluster region block. For example, as in fig. 21(c), cluster region chunks 953 and 954 are labeled with different colors to characterize the velocity information of their corresponding fluid regions.
Based on the above display effect of the cloud-shaped cluster region block, the present invention actually provides another display mode, as shown in figs. 21 and 22: a mode switching command input by a user can switch from the current display mode to the display mode in which the volume image data containing the cluster is displayed, so that the cluster presents a time-varying, rolling visual effect in the output display.
As shown in figs. 18 to 22, when step S500 of marking the fluid velocity vector information of the target point in the three-dimensional ultrasound image data is executed, one, or a combination of two or more, of the color, solid shape, and transparency parameters of the fluid velocity vector identifiers (920, 940, 973, 962, 981, 982) is configured so that they are displayed distinctly from the background image portion of the volume image data (i.e., the stereo image regions of other tissue structures, such as a blood vessel wall region or a lung region). For example, if the blood vessel wall is green, the fluid velocity vector identifier in it is red; or the arterial vessel wall and its fluid velocity vector identifiers both adopt the red color system, while the venous vessel wall and its identifiers both adopt the green color system.
Likewise, different velocity levels and directions of the fluid velocity vector information may also be distinguished by configuring one, or a combination of two or more, of the color, solid shape, and transparency parameters of the fluid velocity vector identifiers (920, 940, 973, 962, 981, 982) used to mark fluid velocity vector information in the volume image data. For example, the intra-arterial fluid velocity vector identifier uses graded colors in the red color system to represent different velocity levels, while the venous fluid velocity vector identifier uses graded colors in the green color system. Dark red or dark green indicates a fast speed, and light red or light green indicates a slow speed. The matching of colors can be found in the related color theory and is not detailed here.
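The velocity-level grading above can be sketched numerically; the number of levels, the brightness range, and the darker-means-faster convention below are assumptions chosen to match the example in the text.

```python
def velocity_color(speed, v_max, arterial=True, levels=4):
    """Grade a speed into `levels` steps and return an RGB shade:
    red family for arterial flow, green family for venous flow;
    darker shades mean faster flow, lighter shades mean slower flow."""
    level = min(int(levels * speed / v_max), levels - 1) if v_max > 0 else 0
    intensity = 1.0 - 0.6 * level / max(levels - 1, 1)   # dark when fast
    return (intensity, 0.0, 0.0) if arterial else (0.0, intensity, 0.0)
```

Each fluid velocity vector identifier is then tinted by the shade returned for its speed, giving the graded red or green color systems described above.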
Further, in each of the embodiments described above, the fluid velocity vector identifier includes a solid marker with an arrow or with a direction guide. For example, the cube with an arrow in fig. 18 or the sphere with an arrow in fig. 19 may be used; alternatively, a prism with an arrow may be used, or a cone whose tip points in the direction of the fluid velocity vector, or a truncated cone whose small end serves as the direction guide; or the direction of the long diagonal in a solid marker with a rhombic vertical section may represent the direction of the fluid velocity vector, or the two ends of the long axis of an ellipsoid may serve as the direction guide, and so on. Therefore, in order to understand the fluid velocity vector information of the target point more intuitively, the direction of the fluid velocity vector may be represented by the arrow or direction guide of the solid marker, and the magnitude of the fluid velocity vector may be represented by the volume of the solid marker.
Alternatively, the fluid velocity vector identifier may also be a solid marker without an arrow or direction guide, such as a three-dimensional structure of any shape: a sphere, an ellipsoid, a cube, a cuboid, and the like. In that case, in order to know the fluid velocity vector information of the target point more intuitively, the magnitude of the fluid velocity vector can be represented by the rotation speed or the volume of the solid marker, and the direction of the fluid velocity vector can be shown by moving the solid marker over time; for example, the fluid velocity vector of the target point can be calculated in the second mode, yielding a fluid velocity vector identifier that flows with time. The rotation speed or volume of the solid marker is graded according to the magnitude of the fluid velocity vector, which facilitates marking on the volume image data or the three-dimensional ultrasound image data. The rotation directions of the solid markers can be the same or different, and the rotation speed should be recognizable by the human eye; so that the human eye can observe the rotation, asymmetric solid markers or solid markers bearing a mark can be adopted.
Alternatively, the rotational speed of the stereo marker may be used to represent the magnitude of the fluid velocity vector, while the arrow direction may be used to characterize the direction of the fluid velocity vector. Therefore, the combination of the above various representations of the magnitude or direction of the fluid velocity vector is not limited in the present invention, and the magnitude of the fluid velocity vector may be represented by the volume magnitude or the rotational speed of the solid marker used to mark the target point fluid velocity vector, and/or the direction of the fluid velocity vector may be characterized by the pointing of an arrow on the solid marker, the pointing of a direction guide, or the movement of the solid marker with time.
In some embodiments of the present invention, the fluid velocity vector information of the target point obtained by the second mode is superimposed on the volume image data; that is, the fluid velocity vector information of the target point is obtained by continuously moving the target point to corresponding positions in the three-dimensional ultrasound image data and correspondingly obtaining a fluid velocity vector at each position in sequence. Then, in step S500, associated markers may link the same target point as it sequentially moves across multiple corresponding positions (e.g., two or more) in the three-dimensional ultrasound image data, forming a motion path trajectory of the target point that is displayed during output. In fig. 22, the associated mark for displaying the motion path trajectory includes an elongated cylinder, a segmented elongated cylinder, a comet tail mark, or the like. In fig. 22, to show the stereoscopic display effect, objects located close to the observation point appear large, and objects located far from the observation point appear small. In fig. 22, 930 is a segment of blood vessel image; a fluid velocity vector identifier (an arrowed sphere 981 or a sphere 982) marking the blood flow velocity vector information of a target point is linked, by an elongated cylinder or a segmented elongated cylinder 991, from its initial position through the multiple corresponding positions in the volume image data across which the same target point sequentially moves, forming a motion path trajectory so that an observer can grasp the motion mode of the target point as a whole. In addition, another way of displaying the trajectory is shown in fig. 22: a comet tail mark 992 is formed by superimposing certain color information over the continuous area from the initial position of the fluid velocity vector identifier through the multiple corresponding positions in the volume image data to which the same target point continuously moves; when observing the trajectory of the target point, the observer sees a long tail dragged behind one fluid velocity vector identifier 982, similar to the tail of a comet.
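The comet tail effect can be sketched as the accumulation of a target point's recent positions with fading opacity; the class name, tail length, and linear fade are assumptions chosen for illustration.

```python
from collections import deque

class CometTail:
    """Accumulate the successive positions of one target point and expose
    them with linearly fading opacity, oldest faintest: the rendered
    result trails behind the current marker like the tail of a comet."""

    def __init__(self, max_len=10):
        self.positions = deque(maxlen=max_len)   # oldest dropped first

    def update(self, position):
        """Record the target point's position for the current frame."""
        self.positions.append(tuple(position))

    def render_list(self):
        """Return (position, alpha) pairs, alpha rising from faint to 1.0."""
        n = len(self.positions)
        return [(pos, (i + 1) / n) for i, pos in enumerate(self.positions)]
```

Calling `update` once per frame and drawing the returned pairs reproduces the long fading tail dragged behind identifier 982.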
In order to facilitate highlighting the motion trajectory in the volume image data, in some embodiments of the present invention, the method further includes:
first, obtaining the marking information related to the associated mark input by a user, and generating a selection instruction, wherein the marking information includes information such as the mark shape of the associated mark or the shape and color of the connecting line; and then configuring, according to the marking information selected in the selection instruction, the relevant parameters of the associated mark used to display the motion path trajectory in the display image.
The color here includes any color obtained by changing hue, saturation (purity), contrast, transparency, and the like, and the mark shape mentioned above may take various forms: any mark that can describe a direction, such as an elongated cylinder, a segmented elongated cylinder, a comet tail mark, etc.
Furthermore, based on the above display effect of the motion path trajectory of the target point, the present invention actually provides another display mode, as shown in fig. 22: a mode switching command input by the user can switch from the current display mode to the display mode in which the motion path trajectory of the target point, formed by associated markers linking the same target point as it sequentially moves across multiple corresponding positions in the three-dimensional ultrasound image data, is displayed.
In addition, there may be a single target point or multiple target points depicting the motion path trajectory, and the initial position may be obtained through an input instruction: for example, a distribution density instruction input by a user is obtained, and target points are randomly selected within the scan target according to the distribution density; or a marking position instruction input by the user is obtained, and the target point is obtained according to the marking position instruction.
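The density-driven random selection of target points can be sketched as follows; the function name, the interpretation of density as a fraction of candidate voxels, and the fixed random seed are assumptions for illustration.

```python
import numpy as np

def seed_target_points(mask, density, rng=None):
    """Randomly select target point seeds inside the scan target (given as
    a boolean mask) according to a user-input distribution density, here
    interpreted as the fraction of candidate voxels to mark."""
    rng = rng if rng is not None else np.random.default_rng(0)
    candidates = np.argwhere(mask)                  # voxels inside the target
    n = max(1, int(len(candidates) * density))      # at least one seed
    picks = rng.choice(len(candidates), size=n, replace=False)
    return [tuple(candidates[i]) for i in picks]

mask = np.ones((10, 10), dtype=bool)                # stand-in scan-target mask
seeds = seed_target_points(mask, density=0.1)       # ten random seed points
```

Each returned seed then serves as the initial position of one motion path trajectory; the marking-position instruction variant would simply use the user-specified coordinates instead.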
However, in step S500, if the three-dimensional ultrasound image data is displayed as a dynamic spatial stereo image based on a true three-dimensional stereoscopic image display technology, the manner of marking the time-varying fluid velocity vector information of the target point in the spatial stereo image, for example how to configure the color, the mark shape, and the like, can follow the method for marking the fluid velocity vector information of the target point in the volume image data, and is not described again here. Of course, the method of displaying three-dimensional ultrasound image data as a dynamic spatial stereo image based on a true three-dimensional stereoscopic image display technology, and marking the time-varying fluid velocity vector information of the target point in the spatial stereo image to obtain the volume image data, may further include the following technical solution:
marking the fluid velocity vector information of the target point at the corresponding position in each frame of three-dimensional ultrasound image data to obtain a single-frame volume image containing the fluid velocity vector marks, and forming, from the multiple volume image frames that are continuous in time, volume image data that can be displayed by a true three-dimensional image display technology.
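The per-frame marking step above may be sketched in code. The following Python fragment, including its function names and the choice of a straight voxel line as the marker glyph, is an illustrative assumption by the editor rather than the claimed implementation; a real system would render arrow glyphs in the volume renderer:

```python
import numpy as np

def mark_velocity_vectors(frames, points, velocities, mark_value=255):
    """For each time frame, burn a simple line marker along the local
    velocity vector into a copy of the 3D volume, then stack the marked
    frames in time order to form the volume image data."""
    marked = []
    for t, vol in enumerate(frames):
        out = vol.copy()
        for p, v in zip(points[t], velocities[t]):
            p = np.asarray(p, float)
            v = np.asarray(v, float)
            n = np.linalg.norm(v)
            if n == 0:
                continue
            step = v / n
            # mark one voxel per unit of speed along the vector direction
            for s in range(int(round(n)) + 1):
                q = np.round(p + s * step).astype(int)
                if all(0 <= q[i] < out.shape[i] for i in range(3)):
                    out[tuple(q)] = mark_value
        marked.append(out)
    return np.stack(marked)  # volume image data shaped (T, Z, Y, X)
```

With one target point at voxel (1, 1, 1) and velocity (3, 0, 0), the marker occupies voxels (1..4, 1, 1) of that frame.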
In step S600, the parallax image generation module converts the volume image data into two paths of parallax image data.
For example, in some embodiments of the present invention, as shown in fig. 23, the two paths of parallax image data are obtained by extracting the temporally adjacent first-time-phase and second-time-phase volume images from the volume image data 900, generating one path of parallax image data with an arbitrary parallax number N from the first-time-phase volume image, and generating the other path with the same parallax number from the second-time-phase volume image. For example, the two volume images may be converted into two paths of parallax image data with 9 parallaxes, each path containing 9 parallax images; or with 2 parallaxes, each path containing 2 parallax images. The parallax number may be any natural number equal to or greater than 1. For each time phase, the volume image is moved viewpoint by viewpoint to the corresponding viewpoint positions at a predetermined parallax angle.
When outputting the two paths of parallax image data, the parallax images are output and displayed in the order of time phase and viewpoint position: first the parallax images generated from the first-time-phase volume image are output in viewpoint order, then the parallax images generated from the second-time-phase volume image are output in viewpoint order, and so on. In this way, the parallax images generated from the continuous multi-frame volume images are output sequentially, each set in the order of viewpoint movement.
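The N-viewpoint generation and the phase-then-viewpoint output order described above can be sketched as follows. This Python fragment is an editor's illustration: it approximates "moving the volume to each viewpoint at a fixed parallax angle" with a simple shear projection, which is an assumption, not the claimed rendering method:

```python
import numpy as np

def parallax_views(volume, n_views, max_shift):
    """Generate n_views parallax images from one time-phase volume by a
    shear projection: each depth slice is shifted horizontally in
    proportion to its depth and the viewpoint offset, then slices are
    summed to form the view image."""
    depth = volume.shape[0]
    views = []
    # viewpoint offsets spread symmetrically around the centre view
    offsets = np.linspace(-max_shift, max_shift, n_views)
    for off in offsets:
        img = np.zeros(volume.shape[1:], dtype=float)
        for z in range(depth):
            shift = int(round(off * (z - depth / 2) / depth))
            img += np.roll(volume[z], shift, axis=1)
        views.append(img)
    return views  # one path of parallax image data (n_views images)

def output_order(volumes_by_phase, n_views, max_shift):
    """Emit parallax images in the described order: all viewpoints of
    phase 1, then all viewpoints of phase 2, and so on."""
    for vol in volumes_by_phase:
        for img in parallax_views(vol, n_views, max_shift):
            yield img
```

With 9 parallaxes, each path contains 9 view images per time phase, matching the 9-parallax example in the text.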
For another example, in some embodiments of the present invention, as shown in fig. 24, the volume image data is played back, two observation viewing angles are established by simulating the left and right eyes of a human, and the played volume image data is photographed from these two viewing angles to obtain the two paths of parallax image data. Each frame of the volume image data is shot from the two observation viewing angles to convert it into two paths of parallax image data. In the process of playing and shooting the volume image data, referring to the effect shown in fig. 26, the played volume image data 900 is displayed on the display 901, and the positions of the light source and of the first and second virtual cameras are set to shoot from the two observation angles, obtaining the two paths of parallax image data for output on the display screen display device so that human eyes can observe a 3D ultrasound image. The display 901 may be a flat panel display at the image processing end, or the above display screen display device; of course, the process of fig. 26 may also be executed entirely inside the background host without being displayed.
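The two virtual cameras simulating left and right eyes can be sketched with a pinhole projection. The geometry below (cameras offset along x by an interpupillary distance, looking down +z) and the 64 mm distance are the editor's assumptions for illustration:

```python
import numpy as np

def project(points, eye, focal=1.0):
    """Pinhole projection of 3D points (x, y, z) onto the image plane
    of a virtual camera at `eye`, looking down the +z axis."""
    rel = points - eye
    return focal * rel[:, :2] / rel[:, 2:3]

def stereo_pair(points, ipd=0.064):
    """Shoot the scene from two observation angles simulating human
    left/right eyes separated by the interpupillary distance `ipd`;
    returns the two paths of parallax image data (as projected points)."""
    left = project(points, np.array([-ipd / 2, 0.0, 0.0]))
    right = project(points, np.array([+ipd / 2, 0.0, 0.0]))
    return left, right
```

A point on the optical axis at depth 1 m projects to horizontally opposite positions in the two views; that horizontal disparity is what the display device turns into perceived depth.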
The conversion of the volume image data into the two paths of parallax image data can be implemented as a software algorithm realizing the function of the parallax image generation module; for example, the three-dimensional ultrasound image data or the volume image data can be converted into the two paths of image data by software programming.
Of course, the conversion of the volume image data into two paths of parallax image data can also be realized by adding hardware combined with a software program. For example, as shown in fig. 25, the 3D image processing module marks the time-varying fluid velocity vector information of the target point in the three-dimensional ultrasound image data and obtains the volume image data containing the fluid velocity vector identifiers; a spatial stereo display apparatus then displays the volume image data as a dynamic spatial stereo image based on a true three-dimensional stereo image display technology, wherein the spatial stereo display apparatus comprises one of a holographic display device based on holographic display technology and a volume pixel display device based on volumetric three-dimensional display technology. The display may be real-time or non-real-time; in the non-real-time mode, the three-dimensional ultrasound image data acquired over a period of time can be displayed, with playback functions such as pause, replay and fast forward. The parallax image generation module 12 then includes a first camera device 841 and a second camera device 842, which respectively shoot the dynamic spatial stereo image to obtain the two paths of parallax image data. The first camera device 841 and the second camera device 842 may be any image pickup apparatus, such as an optical camera or an infrared camera.
In step S700, the display screen display device 8 outputs and displays the two paths of parallax image data so that the human eye perceives a 3D ultrasound image. The display screen display device 8 may be based on glasses-type 3D display technology or naked-eye 3D display technology. For example, based on glasses-type 3D display technology, the display screen display device 8 may include a display screen and wearable glasses for receiving and displaying the two paths of parallax image data. Glasses-type 3D display is realized mainly through special glasses that exploit optical principles. Glasses-type 3D products currently on the market mainly comprise shutter-type 3D and polarization-type 3D from the technical aspect, and passive-viewing and active-viewing types from the viewing aspect. Active-viewing 3D glasses produce the 3D effect through active operation of the glasses themselves, and come in two types: dual-display 3D glasses and liquid-crystal 3D glasses. (1) Dual-display 3D glasses use two small displays, set in the left and right lens positions, to show the left and right images respectively and thus form the 3D effect.
(2) Liquid-crystal 3D glasses are composed of active liquid crystal lenses. The principle is that an electric field changes the transparent state of the liquid crystal, alternately blocking the line of sight of the left and right eyes tens of times per second. The pictures for the left and right eyes only need to be displayed alternately during playback; a synchronization signal then keeps the glasses in step with the pictures, so that while the left-eye picture is shown the right lens is blacked out, and while the right-eye picture is shown the left lens is blacked out, finally forming the 3D effect. However, the picture brightness is reduced by this alternating shuttering. The two paths of parallax image data are in fact the images entering the left and right eyes respectively; for how to output and display them to obtain the glasses-type 3D display effect, reference may be made to the related prior art, which is not detailed here.
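The alternating left/right playback with a synchronization signal described in (2) can be sketched as a frame sequence. This Python generator is an editor's illustration of the timing scheme only, not of any specific glasses protocol:

```python
def shutter_sequence(left_frames, right_frames):
    """Alternate left and right pictures as a shutter-glasses display
    would: while the left picture is shown the right lens is opaque,
    and vice versa. Yields (frame, open_eye) pairs; the open_eye value
    plays the role of the synchronization signal sent to the glasses."""
    for lf, rf in zip(left_frames, right_frames):
        yield lf, "L"  # left lens open, right lens blacked out
        yield rf, "R"  # right lens open, left lens blacked out
```

Because each eye sees only half of the display periods, perceived brightness drops, which is the brightness penalty noted above.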
As another example, based on naked-eye 3D display technology, the display screen display device 8 may include a naked-eye 3D display screen for receiving and displaying the two paths of parallax image data.
Naked-eye 3D display technology combines current panel manufacturing technology with engine software technology. On the manufacturing side, a panoramic (Integral Imaging) approach places a lenticular lens in front of the liquid crystal panel, so that on the same screen, 3D display is realized either by dividing the display area spatially (spatial-multiplexing naked-eye 3D) or by dividing the display time (time-multiplexing naked-eye 3D). On the image side, computer image processing converts the left-right-eye parallax of existing 2D and 3D images into a 9-parallax 3D image. Current naked-eye 3D display technologies take the following forms: (1) optical barrier type (Barrier, also called light-shielding type or Parallax Barrier); (2) lenticular lens type (Lenticular Lens, also called lenticular lens or micro-lenticular lens technology), in which a precise lenticular lens screen added on the liquid crystal panel sends separately encoded 3D images to the left and right eyes, so that 3D can be experienced with naked eyes while remaining 2D-compatible; its greatest advantage over the barrier technology is that brightness is not affected, though the viewing angle is slightly narrower; (3) Multi-Layer Display (MLD), in which two liquid crystal panels overlapped at a certain interval allow naked-eye viewing of 3D text and images; (4) depth-fused 3D display (Depth-fused 3D), in which two liquid crystal panels are stacked front and back, foreground and background images are shown on the front and back panels at different brightness, and the depth-of-field effect arises from the physical depth difference between the panels; and (5) directional backlight (Directional Backlight), in which two sets of fast-response LCD panels and drivers send the 3D images into the viewer's left and right eyes in sequence, so that the parallax between the alternating left and right images produces the 3D effect. The two paths of parallax image data are in fact the images entering the left and right eyes respectively; for how to output and display them to obtain the naked-eye 3D display effect, reference may be made to the related prior art, which is not detailed here. As shown in fig. 27(b), when the image on the display screen display device 8 is viewed with naked eyes, the resulting 3D ultrasound image shows the visual effect of flowing blood flow velocity vector markers; as shown in fig. 27(a), the resulting 3D ultrasound image shows the visual effect of rolling cloud-like clusters.
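The lenticular-lens mapping of multiple parallax views onto one panel can be sketched as a column interleave. This Python fragment is a deliberately simplified editor's illustration: real panels interleave at sub-pixel granularity along slanted lenses, which is not modeled here:

```python
import numpy as np

def interleave_views(views):
    """Column-interleave N parallax views into a single panel image:
    pixel column c of the panel shows view (c mod N), and the lens
    sheet in front of the panel steers each view toward a different
    eye position, producing the naked-eye 3D effect."""
    n = len(views)
    h, w = views[0].shape
    panel = np.empty((h, w), dtype=views[0].dtype)
    for c in range(w):
        panel[:, c] = views[c % n][:, c]
    return panel
```

With 9 views, every ninth column comes from the same view, so each eye position behind the lens sheet sees a coherent image from one viewpoint.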
Fig. 8 (i.e., figs. 8(a) and 8(b)) is a schematic flow diagram of an ultrasound imaging method according to some embodiments of the invention. It should be understood that although the steps in the flowchart of fig. 8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 8 may include multiple sub-steps or stages that need not be performed at the same time; they may be performed at different times and in different orders, and may be performed in parallel or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In the above embodiments, the detailed description addresses only the implementation of the corresponding steps. Provided their logic is not contradictory, the above embodiments may be combined with one another to form new technical solutions, which remain within the disclosure scope of the present invention.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software on a necessary general hardware platform, or by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention may be embodied as a software product carried on a non-volatile computer-readable storage medium (such as a ROM, magnetic disk, optical disk, or server cloud space), including instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
Based on the ultrasonic imaging display method, the invention also provides an ultrasonic imaging system, which comprises:
a probe 1;
a transmission circuit 2 for exciting the probe 1 to transmit a volume ultrasonic beam to a scan target;
a receiving circuit 4 and a beam synthesis module 5, configured to receive echoes of the volume ultrasonic beams and obtain volume ultrasonic echo signals;
a data processing module 9, configured to obtain three-dimensional ultrasound image data of at least a part of the scan target according to the volumetric ultrasound echo signal, and obtain fluid velocity vector information of a target point in the scan target based on the volumetric ultrasound echo signal;
the 3D image processing module 11 is configured to mark fluid velocity vector information of a target point in the three-dimensional ultrasound image data to form a fluid velocity vector identifier, and obtain volumetric image data including the fluid velocity vector identifier;
a parallax image generation module 12, configured to convert the volume image data into two paths of parallax image data; and the display screen display device 8 is used for receiving and displaying the two paths of parallax image data.
The transmitting circuit 2 is configured to perform step S100; the receiving circuit 4 and the beam synthesis module 5 are configured to perform step S200. The data processing module 9 includes a signal processing module 6 and/or an image processing module 7: the signal processing module 6 performs the calculation of the velocity component vectors and the fluid velocity vector, i.e., step S400, and the image processing module 7 performs the image processing, i.e., step S300, acquiring three-dimensional ultrasound image data of at least a portion of the scan target from the obtained volume ultrasound echo signals. The 3D image processing module 11 is configured to execute step S500, and the parallax image generation module 12 to execute step S600. The display screen display device 8 performs 3D ultrasound imaging display, i.e., step S700. For the execution of the above functional modules, refer to the corresponding step descriptions of the ultrasound imaging display method, not repeated here.
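The module-to-step mapping above amounts to a linear pipeline. The following Python sketch of that wiring is an editor's illustration (class and parameter names are assumptions, not the patent's module interfaces); each stage is a callable so hardware or software implementations can be swapped in:

```python
class UltrasoundPipeline:
    """Hypothetical wiring of the named modules across steps S100-S700:
    transmit -> receive/beamform -> data processing -> 3D image
    processing (marking) -> parallax generation -> display."""

    def __init__(self, transmit, beamform, process, mark3d, parallax, display):
        self.stages = [transmit, beamform, process, mark3d, parallax, display]

    def run(self, scan_target):
        # each stage consumes the previous stage's output
        data = scan_target
        for stage in self.stages:
            data = stage(data)
        return data
```

This makes explicit that, for example, the parallax image generation module (S600) only ever sees the volume image data produced by the 3D image processing module (S500).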
In some embodiments of the present invention, the 3D image processing module 11 is further configured to mark the fluid velocity vectors sequentially obtained as the target point moves continuously at the corresponding positions in the three-dimensional ultrasound image data, so that the fluid velocity vector identifiers present a flowing visual effect changing with time when output and displayed.
In some embodiments of the present invention, the display screen display device 8 comprises either a display screen and wearable glasses for receiving and displaying the two paths of parallax image data, or a naked-eye 3D display screen for receiving and displaying the two paths of parallax image data. For a specific description, refer to the description above.
In some embodiments of the invention, echo signals of volume plane ultrasound beams are used to calculate the velocity component vectors and the fluid velocity vector, as well as the three-dimensional ultrasound image data. For example, the transmitting circuit excites the probe to transmit plane ultrasound beams to the scan target; the receiving circuit and the beam synthesis module receive the echoes of the volume plane ultrasound beams and obtain volume plane ultrasound echo signals; and the data processing module acquires the three-dimensional ultrasound image data of at least a part of the scan target and the fluid velocity vector information of the target point from the volume plane ultrasound echo signals.
As another example, the echo signals of the volume plane ultrasound beams are used to calculate the velocity component vectors and the fluid velocity vector, while the echo signals of volume focused ultrasound beams are used to obtain a high-quality ultrasound image. In this case, the transmitting circuit excites the probe to transmit focused ultrasound beams to the scan target; the receiving circuit and the beam synthesis module receive the echoes of the volume focused ultrasound beams and obtain volume focused ultrasound echo signals; and the data processing module acquires the three-dimensional ultrasound image data of at least a part of the scan target from the volume focused ultrasound echo signals. In addition, the transmitting circuit excites the probe to transmit plane ultrasound beams to the scan target, interleaving the transmission of focused ultrasound beams into the transmission of plane ultrasound beams; the receiving circuit and the beam synthesis module receive the echoes of the volume plane ultrasound beams and obtain volume plane ultrasound echo signals; and the data processing module obtains the fluid velocity vector information of the target point in the scan target from the volume plane ultrasound echo signals. For the manner of alternately transmitting the two beam types, refer to the foregoing related contents, not repeated here.
In addition, the data processing module is further configured to obtain enhanced three-dimensional ultrasound image data of at least a part of the scan target by a gray-scale blood flow imaging technique from the volume ultrasound echo signals. The 3D image processing module is further configured to segment the region of interest representing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks, mark the cloud-like cluster region blocks in the three-dimensional ultrasound image data for display, and obtain volume image data containing the cluster bodies, so that the cluster bodies present a rolling visual effect changing with time when output and displayed. See the relevant description above for specific implementations.
As another example, in some embodiments of the present invention, as shown in fig. 1, the system further includes a human-computer interaction device for acquiring commands input by the user; the 3D image processing module is further configured to perform at least one of the following steps:
selecting the distribution density through a cursor displayed in a moving image or through gesture input, acquiring a distribution density instruction input by a user, and randomly selecting the target point in the scanning target according to the distribution density instruction;
selecting the position of a target point through a cursor displayed in a moving image or through gesture input, acquiring a mark position instruction input by a user, and acquiring the target point according to the mark position instruction;
randomly selecting the target point in the scanning target according to a preset distribution density;
acquiring a mode switching instruction input by a user, and switching from a current display mode to a display mode for outputting and displaying a cluster body to enable the cluster body to present a rolling visual effect changing along with time, wherein enhanced three-dimensional ultrasonic image data is obtained through a gray scale blood flow imaging technology, and an interested area representing a fluid area is segmented from the enhanced three-dimensional ultrasonic image data to obtain the cluster body;
setting different transparencies hierarchically for the three-dimensional ultrasound image data according to a command input by the user;
Configuring color parameters of a three-dimensional image area which is included in the volume image data and used for presenting each tissue structure according to the anatomical tissue structure and the hierarchical relationship according to a command input by a user;
configuring one or two parameter combinations of color, three-dimensional shape and transparency identified in the fluid velocity vector according to a command input by a user;
configuring color information of the cluster region block according to a command input by a user;
configuring color information and shape parameters of the associated mark according to a command input by a user;
configuring the position or the parameter of a cursor displayed in the 3D image according to a command input by a user, wherein the display screen display device is also used for displaying the cursor in the image; and
and switching, by the transmitting circuit and according to a command input by the user, the type of ultrasound beam that the probe is excited to transmit to the scan target.
The above steps related to the 3D image processing module performing the corresponding operation according to the command input by the user are referred to the above related contents, and will not be described again here.
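The density-based random selection of target points in the list above can be sketched as follows. This Python fragment is an editor's illustration; in particular, interpreting "distribution density" as the fraction of candidate voxels to select is an assumption:

```python
import numpy as np

def select_target_points(mask, density, rng=None):
    """Randomly select target points inside the scan target according
    to a user-supplied distribution density in (0, 1]. `mask` is a
    boolean volume marking the scan target region."""
    rng = rng or np.random.default_rng(0)
    candidates = np.argwhere(mask)          # voxel coords inside target
    k = max(1, int(round(density * len(candidates))))
    idx = rng.choice(len(candidates), size=k, replace=False)
    return candidates[idx]                  # (k, 3) voxel coordinates
```

The marking-position variant would instead take the user's cursor or gesture coordinates directly as the returned points.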
In some embodiments of the present invention, as shown in fig. 25, the 3D image processing module is configured to mark the time-varying fluid velocity vector information of the target point in the three-dimensional ultrasound image data and obtain the volume image data containing the fluid velocity vector identifiers. The system further includes a spatial stereo display apparatus 800 for displaying the volume image data as a dynamic spatial stereo image based on a true three-dimensional stereo image display technology, wherein the spatial stereo display apparatus 800 includes one of a holographic display device based on holographic display technology and a volume pixel display device based on volumetric three-dimensional display technology. The parallax image generation module includes a first camera device 841 and a second camera device 842, which shoot the dynamic spatial stereo image from two angles to obtain the two paths of parallax image data. The first and second camera devices may have the same structure; for example, both may be infrared cameras or optical cameras.
The above-mentioned spatial stereo display apparatus 800 includes one of a holographic display device based on holographic display technology and a volume pixel display device based on volumetric three-dimensional display technology; see the related description of step S500 above, as shown in figs. 15 to 17.
In some embodiments of the present invention, as shown in fig. 25, the human-computer interaction device 10 includes an electronic device 840 with a touch display screen connected to the data processing module. The electronic device 840 is connected to the data processing module 9 through a communication interface (wireless or wired) and is configured to: receive the three-dimensional ultrasound image data and the fluid velocity vector information of the target point for display on the touch display screen, presenting an ultrasound image (two-dimensional or three-dimensional, displayed based on the three-dimensional ultrasound image data) with the fluid velocity vector information superimposed on it; and receive operation commands input by the user on the touch display screen and transmit them to the data processing module 9, where an operation command may include any one or more of the user-input commands mentioned above. The data processing module 9 is configured to derive the relevant configuration or switching instruction from the operation command and transmit it to the spatial stereo display apparatus 800; the spatial stereo display apparatus 800 adjusts the displayed spatial stereo image accordingly, so that image rotation, image parameter configuration, display mode switching and other controls performed by the user on the touch display screen are synchronously reflected in the spatial stereo image. As shown in fig. 25, the spatial stereo display apparatus 800 adopts the holographic display device shown in fig. 15; by synchronously displaying the ultrasound image with the superimposed fluid velocity vector information on the electronic device 840 connected to the data processing module 9, the observer is given a way to input operation commands and thereby interact with the displayed spatial stereo image.
Furthermore, in some embodiments of the present invention, the human-computer interaction device 10 may also be a physical control (such as a keyboard, joystick, or scroll wheel), a virtual keyboard, or a gesture input device with a camera. The gesture input device here is a device that tracks gesture input by capturing images and applying image recognition technology; for example, an infrared camera captures images of the gesture input, and image recognition derives the operation instruction that the gesture represents.
Based on the above embodiments, the present invention also provides a three-dimensional ultrasound fluid imaging system, which includes:
a probe 1;
a transmission circuit 2 for exciting the probe 1 to transmit a volume ultrasonic beam to a scan target;
a receiving circuit 4 and a beam synthesis module 5, configured to receive echoes of the volume ultrasonic beams and obtain volume ultrasonic echo signals;
a data processing module 9, configured to obtain enhanced three-dimensional ultrasound image data of at least a part of the scan target by a gray-scale blood flow imaging technique according to the volume ultrasound echo signal;
a 3D image processing module 11, configured to segment the region of interest representing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks, and mark the cloud-like cluster region blocks in the three-dimensional ultrasound image data to obtain volume image data containing the cloud-like clusters;
a parallax image generation module 12, configured to convert the volume image data into two paths of parallax image data;
and a display screen display device 8 for outputting and displaying the two paths of parallax image data so that human eyes can observe the visual effect of the clusters rolling over time.
The transmitting circuit 2 is configured to perform step S100; the receiving circuit 4 and the beam synthesis module 5 are configured to perform step S200. The data processing module 9 includes a signal processing module 6 and/or an image processing module 7: the signal processing module 6 processes the synthesized echo signals, and the image processing module 7 performs the image processing that produces the enhanced three-dimensional ultrasound image data, i.e., step S310, acquiring three-dimensional ultrasound image data of at least a part of the scan target from the volume ultrasound echo signals obtained within the preset time period. The 3D image processing module 11 is configured to perform the segmentation and marking of the clusters in the enhanced three-dimensional ultrasound image data in step S510, and the parallax image generation module 12 to perform step S600. The display screen display device 8 performs 3D ultrasound imaging display, i.e., step S700. For the execution of the above functional modules, refer to the corresponding step descriptions of the ultrasound imaging display method, not repeated here.
In some embodiments of the present invention, the 3D image processing module is further configured to convert the three-dimensional ultrasound image data into volume image data with a perspective effect, and mark a cloud-like cluster region block that changes with time in the volume image data.
In some embodiments of the invention, the 3D image processing module is further configured to:
setting different transparencies hierarchically for each frame of three-dimensional ultrasonic image data, marking a cloud-shaped cluster region block in each frame of three-dimensional ultrasonic image data to obtain a single-frame body image containing the cloud-shaped cluster body, and forming the body image data by continuous multi-frame body images along with time; or,
converting, based on three-dimensional drawing software, each frame of three-dimensional ultrasound image data into a three-dimensional perspective-effect image, marking the cloud-like cluster region blocks in each such image to obtain single-frame images containing cloud-like clusters, and forming the volume image data from the multiple frames continuous in time.
In some embodiments of the present invention, the 3D image processing module is further configured to perform the following steps to convert the three-dimensional ultrasound image data into perspective volume image data:
making parallel sections or concentric spherical sections through the three-dimensional ultrasound image data, and setting each section to a different transparency, or setting a plurality of sections to stepwise, gradually changing transparencies; and/or
performing tissue structure segmentation on the three-dimensional ultrasound image data, and setting different transparencies for the segmented tissue structure regions.
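The parallel-section, stepwise-transparency variant above can be sketched as follows. This Python fragment is an editor's illustration: cutting along the depth axis and the near/far alpha values are assumptions, not the claimed parameters:

```python
import numpy as np

def graded_section_alpha(volume, n_sections, alpha_near=0.9, alpha_far=0.2):
    """Cut the volume into parallel sections along the depth axis and
    assign each section a stepwise, gradually changing opacity,
    producing the see-through (perspective) volume data described
    above. Returns the volume stacked with a per-voxel alpha channel."""
    depth = volume.shape[0]
    alphas = np.linspace(alpha_near, alpha_far, n_sections)
    bounds = np.linspace(0, depth, n_sections + 1).astype(int)
    alpha_vol = np.empty(volume.shape, dtype=float)
    for i in range(n_sections):
        alpha_vol[bounds[i]:bounds[i + 1]] = alphas[i]
    return np.stack([volume.astype(float), alpha_vol])  # (2, Z, Y, X)
```

The tissue-segmentation variant would assign one alpha per segmented region label instead of per depth section.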
In some embodiments of the present invention, the 3D image processing module is further configured to:
in the step of segmenting the region of interest representing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks, segmenting the region of interest based on image gray scale to obtain cluster region blocks with different gray-scale characteristics, and rendering these cluster region blocks in the three-dimensional ultrasound image data with different colors; or,
and (4) superposing different colors on the same cloud-shaped cluster region block obtained by segmentation according to the gray level change of different regions in the cluster region block for rendering.
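A minimal sketch of the gray-scale-based coloring step follows. The threshold values and the color table are illustrative assumptions; the disclosure only specifies that cluster region blocks with different gray-scale features are rendered in different colors.

```python
import numpy as np

def colorize_cluster_blocks(volume, thresholds=(60, 120, 180)):
    """Segment a gray-scale volume into cluster region blocks by intensity
    and assign each block a distinct RGB color for rendering.

    `thresholds` splits the 0-255 gray range into bands; both the band
    edges and the palette below are hypothetical choices.
    """
    palette = np.array([
        [0, 0, 0],        # background (below the lowest threshold)
        [70, 70, 255],    # low-intensity cluster block
        [70, 255, 70],    # mid-intensity cluster block
        [255, 70, 70],    # high-intensity cluster block
    ], dtype=np.uint8)
    # np.digitize maps each voxel to the index of its gray-scale band.
    bands = np.digitize(volume, thresholds)
    return palette[bands]  # (Z, Y, X, 3) colorized volume

vol = np.array([[[30, 90], [150, 210]]])  # one 2x2 slice spanning four bands
rgb = colorize_cluster_blocks(vol)
```

In practice the color overlay would be alpha-blended onto the B-mode volume rather than replacing it.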
For the above-mentioned functions of the 3D image processing module, reference may be made to the related descriptions in the foregoing.
In summary, the present invention provides a three-dimensional ultrasound fluid imaging method and an ultrasound imaging system applicable to imaging and displaying blood flow information. By means of an advanced display screen and a 3D stereoscopic display technique, a better viewing angle of the 3D ultrasound image is provided for the user, enabling real-time understanding of the scanning position, displaying blood flow information with a more realistic image display effect, and truly reproducing the fluid movement in the scan target. A multi-angle, omnidirectional viewing angle is provided for the user, more comprehensive and accurate image data are provided for medical personnel, and a novel blood flow imaging display mode is created for blood flow imaging display on the ultrasound system. In addition, the invention provides a novel display method based on calculating the fluid velocity vector information of the target point, which can more truly present the actual flowing state of the fluid and intuitively show the movement track of the target point along the flow direction. Meanwhile, the invention provides more personalized customization, offering more accurate and more visual data support for the user to conveniently observe the real fluid state.
The invention also provides a display mode capable of presenting a gray scale enhancement effect on the ultrasonic stereo image, in which the gray scale variation of the region of interest is represented by different colors and the flowing condition of the cluster region is presented dynamically.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present invention. It should be noted that those skilled in the art can make various changes, modifications and combinations without departing from the spirit of the invention, and all such changes, modifications and combinations fall within the scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (43)
- A method of three-dimensional ultrasonic fluid imaging, comprising: emitting a volume ultrasonic beam toward a scan target; receiving an echo of the volume ultrasonic beam to obtain a volume ultrasonic echo signal; acquiring three-dimensional ultrasonic image data of at least one part of the scan target according to the volume ultrasonic echo signal; obtaining fluid velocity vector information of a target point in the scan target based on the volume ultrasonic echo signal; marking the fluid velocity vector information of the target point in the three-dimensional ultrasonic image data to form a fluid velocity vector identifier, and obtaining volume image data containing the fluid velocity vector identifier; converting the volume image data into two paths of parallax image data; and outputting and displaying the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging method according to claim 1, wherein the fluid velocity vector information of the target point comprises: fluid velocity vectors sequentially obtained as the target point continuously moves to corresponding positions in the three-dimensional ultrasonic image data, so that the fluid velocity vector identifier presents a flowing visual effect that changes with time when output and displayed.
- The method according to claim 1, wherein the step of obtaining the volume image data containing the fluid velocity vector identifier comprises: converting the three-dimensional ultrasound image data into volume image data with a perspective effect, and marking the fluid velocity vector information of the target point over time in the volume image data to form a fluid velocity vector identifier that changes over time.
- The method of claim 1, wherein marking the fluid velocity vector information of the target point in the three-dimensional ultrasound image data forms a fluid velocity vector identifier, and the step of obtaining volume image data containing the fluid velocity vector identifier comprises: setting different transparencies hierarchically for each frame of three-dimensional ultrasonic image data, marking the fluid velocity vector information of the target point at the corresponding position in each frame of three-dimensional ultrasonic image data to obtain a single-frame volume image containing the fluid velocity vector identifier, and forming the volume image data from multiple volume images that are continuous over time; or, converting each frame of three-dimensional ultrasonic image data into a pair of three-dimensional perspective-effect images based on three-dimensional drawing software, marking the fluid velocity vector information at the corresponding position of the target point in each pair of three-dimensional effect images to obtain a single-frame volume image containing the fluid velocity vector identifier, and forming the volume image data from multiple volume images that are continuous over time; or, displaying the three-dimensional ultrasonic image data as a dynamic spatial stereo image based on a true three-dimensional stereo image display technology, and marking the fluid velocity vector information of the target point changing over time in the spatial stereo image to obtain the volume image data.
- The method of three-dimensional ultrasound fluid imaging according to claim 4, wherein the step of hierarchically setting different transparencies for the three-dimensional ultrasound image data comprises: making parallel sections or concentric spherical sections of the three-dimensional ultrasonic image data, and setting each section to a different transparency, or setting a plurality of sections to transparencies that change stepwise and gradually in sequence; and/or, carrying out tissue structure segmentation on the three-dimensional ultrasonic image data, and setting different transparencies for the tissue structure regions obtained by segmentation.
- The method of claim 1, wherein the step of converting the volume image data into two paths of parallax image data comprises: extracting a first-time-phase volume image and a second-time-phase volume image that are adjacent in time in the volume image data, generating one path of parallax image data with an arbitrary parallax number from the first-time-phase volume image, and generating the other path of parallax image data with the same parallax number from the second-time-phase volume image, thereby obtaining the two paths of parallax image data; or, playing the volume image data, establishing two observation visual angles simulating the left eye and the right eye of a human, and shooting the played volume image data from the two observation visual angles respectively, so as to obtain the two paths of parallax image data.
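The two-view generation described in this claim can be sketched minimally as follows. A real system would re-render the volume from two simulated eye poses; the fixed horizontal `disparity` shift used here, and the function name, are illustrative simplifications rather than the claimed implementation.

```python
import numpy as np

def two_view_parallax(frame, disparity=4):
    """Produce a left/right image pair from one rendered volume frame by a
    horizontal pixel shift, standing in for capturing the played-back
    volume image data from two simulated eye positions.
    """
    left = np.roll(frame, disparity // 2, axis=1)    # shift right -> left-eye view
    right = np.roll(frame, -disparity // 2, axis=1)  # shift left -> right-eye view
    return left, right

frame = np.arange(16, dtype=float).reshape(4, 4)
l, r = two_view_parallax(frame, disparity=2)
```

Each frame of the played volume image data would be passed through such a function to yield the two paths of parallax image data fed to the display device.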
- The method of claim 1, wherein the step of acquiring three-dimensional ultrasound image data of at least a portion of the scan target further comprises: obtaining enhanced three-dimensional ultrasound image data of at least a portion of the scan target by a gray scale blood flow imaging technique; and the step of marking the fluid velocity vector information of the target point in the three-dimensional ultrasonic image data to obtain the volume image data containing the fluid velocity vector identifier comprises: segmenting a region of interest used for representing a fluid region in the enhanced three-dimensional ultrasonic image data to obtain a cloud-shaped cluster region block; and marking the cloud-shaped cluster region block in the three-dimensional ultrasonic image data to form a cluster body, and obtaining volume image data containing the cluster body, so that the cluster body presents a rolling visual effect changing with time when output and displayed.
- The method of claim 7, further comprising the step of superimposing color information on the cloud-shaped cluster region block, the step comprising: segmenting the region of interest used for representing the fluid region in the enhanced three-dimensional ultrasonic image data based on image gray scale to obtain cluster region blocks with different gray scale characteristics, and rendering the cluster region blocks with different gray scale characteristics in the three-dimensional ultrasonic image data in different colors; or, for the same cloud-shaped cluster region block, superimposing different colors for rendering according to the gray scale variation of different regions within the cluster region block; or, superimposing correspondingly set color information on the cluster region block according to the velocity information of the fluid region represented by the cluster region block.
- The method of claim 1, wherein the fluid velocity vector identifier employs a volumetric marker, and represents the magnitude of the fluid velocity vector by the volume size or rotational velocity of the volumetric marker, and/or characterizes the direction of the fluid velocity vector by the pointing of an arrow on the volumetric marker, the pointing of a direction guide, or the movement of the volumetric marker over time.
- The three-dimensional ultrasound fluid imaging method according to claim 1, wherein in obtaining the fluid velocity vector information of the target point within the scan target based on the volume ultrasonic echo signal, the target point is selected by performing one of the following steps: selecting a distribution density by moving a cursor displayed in the image or through gesture input, acquiring a distribution density instruction input by the user, and randomly selecting the target point in the scan target according to the distribution density instruction; selecting the position of the target point by moving a cursor displayed in the image or through gesture input, acquiring a mark position instruction input by the user, and obtaining the target point according to the mark position instruction; and randomly selecting the target point in the scan target according to a preset distribution density.
- The method of claim 2, wherein the step of marking the fluid velocity vector information of the target point in the three-dimensional ultrasound image data further comprises: using an associated marker that sequentially passes through the plurality of corresponding positions to which the same target point continuously moves in the three-dimensional ultrasonic image data, so as to form a motion track of the target point and display the motion track during output display.
- The method of three-dimensional ultrasonic fluid imaging according to claim 11, wherein the associated marker comprises an elongated cylinder, a segmented elongated cylinder, or a comet-tail marker.
- The three-dimensional ultrasound fluid imaging method according to claim 1, wherein the step of obtaining the fluid velocity vector information of the target point within the scan target based on the volume ultrasonic echo signal comprises: obtaining at least two frames of three-dimensional ultrasonic image data according to the volume ultrasonic echo signal; obtaining a gradient along the time direction at the target point according to the three-dimensional ultrasonic image data, and obtaining a first velocity component along the ultrasonic propagation direction at the target point according to the three-dimensional ultrasonic image data; obtaining, according to the gradient and the first velocity component, a second velocity component along a first direction and a third velocity component along a second direction at the target point, wherein the first direction, the second direction and the ultrasonic propagation direction are pairwise perpendicular to one another; and synthesizing the fluid velocity vector of the target point from the first velocity component, the second velocity component and the third velocity component.
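The final synthesis step of this claim reduces to combining three orthogonal components into one vector. The sketch below assumes the three components have already been estimated; the function and argument names are illustrative, and the axis ordering is an arbitrary choice for the example.

```python
import numpy as np

def synthesize_velocity_vector(v_axial, v_lateral, v_elevational):
    """Combine three pairwise-perpendicular velocity components into a
    fluid velocity vector and its magnitude.

    `v_axial` is the first component along the ultrasound propagation
    direction; the other two lie along the mutually perpendicular first
    and second directions of the claim.
    """
    v = np.array([v_lateral, v_elevational, v_axial], dtype=float)
    return v, float(np.linalg.norm(v))

# Usage: components (axial=3, lateral=4, elevational=12) give speed 13.
v, speed = synthesize_velocity_vector(3.0, 4.0, 12.0)
```

The direction of `v` drives the arrow or direction guide of the volumetric marker, while `speed` drives its size or rotational velocity.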
- The method of claim 1, wherein the process from the step of emitting the ultrasound beam toward the scan target to acquiring the three-dimensional ultrasound image data and the fluid velocity vector information of the target point comprises: emitting a volume plane ultrasonic beam toward the scan target, receiving an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal, acquiring the three-dimensional ultrasonic image data according to the volume plane ultrasonic echo signal, and obtaining the fluid velocity vector information of the target point based on the volume plane ultrasonic echo signal; or, respectively emitting a volume plane ultrasonic beam and a volume focused ultrasonic beam toward the scan target, receiving an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal, receiving an echo of the volume focused ultrasonic beam to obtain a volume focused ultrasonic echo signal, acquiring the three-dimensional ultrasonic image data according to the volume focused ultrasonic echo signal, and obtaining the fluid velocity vector information of the target point based on the volume plane ultrasonic echo signal.
- The method of three-dimensional ultrasound fluid imaging according to claim 1, wherein the step of receiving the echo of the volume ultrasonic beam to obtain a volume ultrasonic echo signal comprises: receiving echoes of the volume ultrasonic beams on a plurality of scan volumes to obtain multiple groups of volume ultrasonic echo signals, wherein ultrasonic transmitting array elements are excited to transmit volume ultrasonic beams toward the scan target along a plurality of ultrasonic propagation directions, the volume ultrasonic beams propagating in the space where the scan target is located to form the plurality of scan volumes; and the step of obtaining the fluid velocity vector information of the target point in the scan target based on the volume ultrasonic echo signal comprises: calculating a velocity component of the target point in the scan target based on one group of volume ultrasonic echo signals among the multiple groups of volume ultrasonic echo signals, and respectively obtaining a plurality of velocity components according to the multiple groups of volume ultrasonic echo signals; and synthesizing the fluid velocity vector of the target point from the plurality of velocity components to generate the fluid velocity vector information of the target point.
- A method of three-dimensional ultrasonic fluid imaging, comprising: emitting a volume ultrasonic beam toward a scan target; receiving an echo of the volume ultrasonic beam to obtain a volume ultrasonic echo signal; obtaining enhanced three-dimensional ultrasonic image data of at least one part of the scan target by a gray scale blood flow imaging technology according to the volume ultrasonic echo signal; segmenting a region of interest used for representing a fluid region in the enhanced three-dimensional ultrasonic image data to obtain a cloud-shaped cluster region block; marking the cloud-shaped cluster region block in the three-dimensional ultrasonic image data to form a cluster body, and obtaining volume image data containing the cluster body; converting the volume image data into two paths of parallax image data; and outputting and displaying the two paths of parallax image data, so that the cluster body presents a rolling visual effect changing with time when output and displayed.
- The method of claim 16, wherein the step of marking the cloud-shaped cluster region block in the three-dimensional ultrasound image data to obtain volume image data containing the cloud-shaped cluster body comprises: converting the three-dimensional ultrasound image data into volume image data with a perspective effect, and marking the cloud-shaped cluster region block changing over time in the volume image data.
- The method of claim 16, wherein the step of marking the cloud-shaped cluster region block in the three-dimensional ultrasound image data and obtaining volume image data containing the cloud-shaped cluster body comprises: setting different transparencies hierarchically for each frame of three-dimensional ultrasonic image data, marking the cloud-shaped cluster region block in each frame of three-dimensional ultrasonic image data to obtain a single-frame volume image containing the cloud-shaped cluster body, and forming the volume image data from multiple volume images that are continuous over time; or, converting each frame of three-dimensional ultrasonic image data into a pair of three-dimensional perspective-effect images based on three-dimensional drawing software, marking the cloud-shaped cluster region block in each pair of three-dimensional effect images to obtain a single-frame image containing the cloud-shaped cluster body, and forming the volume image data from multiple frame images that are continuous over time; or, displaying the three-dimensional ultrasonic image data as a dynamic spatial stereo image based on a true three-dimensional stereo image display technology, and marking the cloud-shaped cluster region block changing over time in the spatial stereo image to obtain the volume image data.
- The method of three-dimensional ultrasound fluid imaging according to claim 18, wherein the step of hierarchically setting different transparencies for the three-dimensional ultrasound image data comprises: making parallel sections or concentric spherical sections of the three-dimensional ultrasonic image data, and setting each section to a different transparency, or setting a plurality of sections to transparencies that change stepwise and gradually in sequence; and/or, carrying out tissue structure segmentation on the three-dimensional ultrasonic image data, and setting different transparencies for the tissue structure regions obtained by segmentation.
- The method of claim 16, wherein the step of converting the volume image data into two paths of parallax image data comprises: extracting a first-time-phase volume image and a second-time-phase volume image that are adjacent in time in the volume image data, generating one path of parallax image data with an arbitrary parallax number from the first-time-phase volume image, and generating the other path of parallax image data with the same parallax number from the second-time-phase volume image, thereby obtaining the two paths of parallax image data; or, playing the volume image data, establishing two observation visual angles simulating the left eye and the right eye of a human, and shooting the played volume image data from the two observation visual angles respectively, so as to obtain the two paths of parallax image data.
- The method of claim 16, further comprising the step of superimposing color information on the cloud-shaped cluster region block, the step comprising: segmenting the region of interest used for representing the fluid region in the enhanced three-dimensional ultrasonic image data based on image gray scale to obtain cluster region blocks with different gray scale characteristics, and rendering the cluster region blocks with different gray scale characteristics in the three-dimensional ultrasonic image data in different colors; or, for the same cloud-shaped cluster region block, superimposing different colors for rendering according to the gray scale variation of different regions within the cluster region block; or, superimposing correspondingly set color information on the cluster region block according to the velocity information of the fluid region represented by the cluster region block.
- A three-dimensional ultrasound fluid imaging system, comprising: a probe; a transmitting circuit for exciting the probe to emit a volume ultrasonic beam toward a scan target; a receiving circuit and a beam synthesis module for receiving an echo of the volume ultrasonic beam to obtain a volume ultrasonic echo signal; a data processing module for acquiring three-dimensional ultrasonic image data of at least one part of the scan target according to the volume ultrasonic echo signal, and obtaining fluid velocity vector information of a target point in the scan target based on the volume ultrasonic echo signal; a 3D image processing module for marking the fluid velocity vector information of the target point in the three-dimensional ultrasonic image data to form a fluid velocity vector identifier, and obtaining volume image data containing the fluid velocity vector identifier; a parallax image generation module for converting the volume image data into two paths of parallax image data; and a display screen display device for receiving and displaying the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the 3D image processing module is further configured to mark the fluid velocity vectors sequentially obtained as the target point continuously moves to corresponding positions in the three-dimensional ultrasound image data, so that the fluid velocity vector identifier presents a flowing visual effect that changes with time when output and displayed.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the display screen display device comprises: a display screen and wearable glasses for receiving and displaying the two paths of parallax image data; or a naked-eye 3D display screen for receiving and displaying the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the 3D image processing module is further configured to: convert the three-dimensional ultrasonic image data into volume image data with a perspective effect, and mark the fluid velocity vector information of the target point changing over time in the volume image data to form the fluid velocity vector identifier that changes over time.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the 3D image processing module is further configured to: set different transparencies hierarchically for each frame of three-dimensional ultrasonic image data, mark the fluid velocity vector information of the target point at the corresponding position in each frame of three-dimensional ultrasonic image data to obtain a single-frame volume image containing the fluid velocity vector identifier, and form the volume image data from multiple volume images that are continuous over time; or, convert each frame of three-dimensional ultrasonic image data into a pair of three-dimensional perspective-effect images based on three-dimensional drawing software, mark the fluid velocity vector information at the corresponding position of the target point in each pair of three-dimensional effect images to obtain a single-frame volume image containing the fluid velocity vector identifier, and form the volume image data from multiple volume images that are continuous over time.
- The three-dimensional ultrasound fluid imaging system of claim 22, wherein the 3D image processing module is further configured to hierarchically set different transparencies for the three-dimensional ultrasound image data by: making parallel sections or concentric spherical sections of the three-dimensional ultrasonic image data, and setting each section to a different transparency, or setting a plurality of sections to transparencies that change stepwise and gradually in sequence; and/or, carrying out tissue structure segmentation on the three-dimensional ultrasonic image data, and setting different transparencies for the tissue structure regions obtained by segmentation.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the parallax image generation module is configured to: extract a first-time-phase volume image and a second-time-phase volume image that are adjacent in time in the volume image data, generate one path of parallax image data with an arbitrary parallax number from the first-time-phase volume image, and generate the other path of parallax image data with the same parallax number from the second-time-phase volume image, thereby obtaining the two paths of parallax image data; or, play the volume image data, establish two observation visual angles simulating the left eye and the right eye of a human, and shoot the played volume image data from the two observation visual angles respectively, so as to obtain the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the data processing module is further configured to obtain enhanced three-dimensional ultrasound image data of at least a portion of the scan target by a gray scale blood flow imaging technique based on the volume ultrasonic echo signal; and the 3D image processing module is further configured to segment a region of interest used for representing a fluid region in the enhanced three-dimensional ultrasound image data to obtain a cloud-shaped cluster region block, mark the cloud-shaped cluster region block in the three-dimensional ultrasound image data for display, and obtain volume image data containing a cluster body, so that the cluster body presents a rolling visual effect that changes with time when output and displayed.
- The three-dimensional ultrasound fluid imaging system according to claim 29, wherein the 3D image processing module is further configured to: segment the region of interest used for representing the fluid region in the enhanced three-dimensional ultrasound image data based on image gray scale to obtain cluster region blocks with different gray scale characteristics, and render the cluster region blocks with different gray scale characteristics in the three-dimensional ultrasound image data in different colors; or, for the same cloud-shaped cluster region block obtained by segmentation, superimpose different colors for rendering according to the gray scale variation of different regions within the cluster region block; or, superimpose correspondingly set color information on the cluster region block according to the velocity information of the fluid region represented by the cluster region block.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the fluid velocity vector identifier employs a volumetric marker, the magnitude of the fluid velocity vector is represented by the volume size or rotational velocity of the volumetric marker, and/or the direction of the fluid velocity vector is characterized by the pointing of an arrow on the volumetric marker, the pointing of a direction guide, or the movement of the volumetric marker over time.
- The three-dimensional ultrasound fluid imaging system according to claim 23, wherein the 3D image processing module is further configured to: use an associated marker that sequentially passes through the plurality of corresponding positions to which the same target point continuously moves in the three-dimensional ultrasonic image data, so as to form a motion track of the target point and display the motion track during output display.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the transmitting circuit is configured to excite the probe to emit a volume plane ultrasonic beam toward the scan target, the receiving circuit and the beam synthesis module are configured to receive an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal, and the data processing module is configured to acquire the three-dimensional ultrasound image data according to the volume plane ultrasonic echo signal and obtain the fluid velocity vector information of the target point based on the volume plane ultrasonic echo signal; or, the transmitting circuit is configured to excite the probe to respectively emit a volume plane ultrasonic beam and a volume focused ultrasonic beam toward the scan target, the receiving circuit and the beam synthesis module are configured to receive an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal and to receive an echo of the volume focused ultrasonic beam to obtain a volume focused ultrasonic echo signal, and the data processing module is configured to acquire the three-dimensional ultrasonic image data according to the volume focused ultrasonic echo signal and obtain the fluid velocity vector information of the target point based on the volume plane ultrasonic echo signal.
- The three-dimensional ultrasound fluid imaging system according to claim 22, wherein the 3D image processing module is configured to mark the fluid velocity vector information of the target point over time in the three-dimensional ultrasound image data to obtain the volume image data containing the fluid velocity vector identifier; the system further comprises: a spatial stereoscopic display apparatus for displaying the volume image data as a dynamic spatial stereoscopic image based on a true three-dimensional stereoscopic image display technology, the spatial stereoscopic display apparatus comprising one of a holographic display device based on the holographic display technology and a voxel display device based on the volumetric three-dimensional display technology; and the parallax image generation module comprises a first camera device and a second camera device, which shoot the dynamic spatial stereoscopic image from two angles to obtain the two paths of parallax image data.
- A three-dimensional ultrasound fluid imaging system, comprising: a probe; a transmitting circuit for exciting the probe to emit a volume ultrasonic beam toward a scan target; a receiving circuit and a beam synthesis module for receiving an echo of the volume ultrasonic beam to obtain a volume ultrasonic echo signal; a data processing module for obtaining enhanced three-dimensional ultrasonic image data of at least one part of the scan target through a gray scale blood flow imaging technology according to the volume ultrasonic echo signal; a 3D image processing module for segmenting a region of interest used for representing a fluid region in the enhanced three-dimensional ultrasonic image data to obtain a cloud-shaped cluster region block, marking the cloud-shaped cluster region block in the three-dimensional ultrasonic image data, and obtaining volume image data containing the cluster body; a parallax image generation module for converting the volume image data into two paths of parallax image data; and a display screen display device for outputting and displaying the two paths of parallax image data, so that the cluster body presents a rolling visual effect changing with time when output and displayed.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein the 3D image processing module is further configured to convert the three-dimensional ultrasound image data into volume image data with a perspective effect and mark the time-varying cloud-shaped cluster region block in the volume image data.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein the 3D image processing module is further configured to: set different transparencies hierarchically for each frame of three-dimensional ultrasonic image data, mark the cloud-shaped cluster region block in each frame of three-dimensional ultrasonic image data to obtain a single-frame volume image containing the cloud-shaped cluster body, and form the volume image data from multiple volume images that are continuous over time; or, convert each frame of three-dimensional ultrasonic image data into a pair of three-dimensional perspective-effect images based on three-dimensional drawing software, mark the cloud-shaped cluster region block in each pair of three-dimensional effect images to obtain a single-frame volume image containing the cloud-shaped cluster body, and form the volume image data from multiple volume images that are continuous over time.
- The three-dimensional ultrasound fluid imaging system according to claim 36, wherein the 3D image processing module is further configured to perform the following steps to convert the three-dimensional ultrasonic image data into perspective-effect volume image data: take parallel sections or concentric spherical sections of the three-dimensional ultrasonic image data, and set each section to a different transparency, or set a plurality of sections to transparencies that change stepwise in sequence; and/or segment the tissue structure of the three-dimensional ultrasonic image data, and set different transparencies for the tissue structure regions obtained by the segmentation.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein the parallax image generation module is further configured to: extract a first time-phase volume image and a second time-phase volume image that are temporally adjacent in the volume image data, generate one path of parallax image data with an arbitrary parallax number from the first time-phase volume image, and generate the other path of parallax image data with the same parallax number from the second time-phase volume image, thereby obtaining the two paths of parallax image data; or play the volume image data, establish two observation viewing angles simulating the left and right eyes of a human, and capture the played volume image data from the two observation viewing angles respectively, thereby obtaining the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein, in the step of segmenting the region of interest representing the fluid region in the enhanced three-dimensional ultrasonic image data to obtain cloud-like cluster region blocks, the 3D image processing module is further configured to: segment the region of interest based on image gray scale to obtain cluster region blocks with different gray-scale features, and render the cluster region blocks with different gray-scale features in the three-dimensional ultrasonic image data in different colors; or render a single segmented cloud-like cluster region block by superimposing different colors according to the gray-level variation of different sub-regions within the cluster region block; or superimpose correspondingly set color information on the cluster region block according to the velocity information of the fluid region represented by the cluster region block.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein, in the system, the transmitting circuit is configured to excite the probe to emit a volume plane ultrasonic beam to the scanning target, the receiving circuit and the beam synthesis module are configured to receive an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal, and the data processing module is configured to acquire the three-dimensional ultrasonic image data according to the volume plane ultrasonic echo signal and to obtain fluid velocity vector information of a target point based on the volume plane ultrasonic echo signal; or the transmitting circuit is configured to excite the probe to emit a volume plane ultrasonic beam and a volume focused ultrasonic beam to the scanning target respectively, the receiving circuit and the beam synthesis module are configured to receive an echo of the volume plane ultrasonic beam to obtain a volume plane ultrasonic echo signal and to receive an echo of the volume focused ultrasonic beam to obtain a volume focused ultrasonic echo signal, and the data processing module is configured to obtain the three-dimensional ultrasonic image data according to the volume focused ultrasonic echo signal and to obtain the fluid velocity vector information of the target point based on the volume plane ultrasonic echo signal.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein the display screen display device comprises: a display screen and wearable glasses for receiving and displaying the two paths of parallax image data; or a naked-eye 3D display screen for receiving and displaying the two paths of parallax image data.
- The three-dimensional ultrasound fluid imaging system according to claim 35, wherein the system further comprises a spatial stereo display device for displaying the volume image data as a dynamic spatial stereo image based on a true three-dimensional stereo display technology, the spatial stereo display device comprising one of a holographic display device based on holographic display technology and a voxel display device based on volumetric three-dimensional display technology; and wherein the parallax image generation module comprises a first camera device and a second camera device, the first camera device and the second camera device capturing the dynamic spatial stereo image from two angles to obtain the two paths of parallax image data.
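The stepwise-transparency conversion described in claim 38 (setting parallel sections of the volume to gradually changing transparencies) can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the array shapes, alpha range, and function name are all assumptions made for the example.

```python
import numpy as np

# Hypothetical grayscale ultrasound volume (depth, height, width), values in [0, 1].
volume = np.random.default_rng(0).random((8, 64, 64)).astype(np.float32)

def layered_alpha_volume(vol, alpha_near=0.9, alpha_far=0.1):
    """Attach a per-slice alpha that steps down with depth, so parallel
    sections of the volume become progressively more transparent."""
    n_slices = vol.shape[0]
    alphas = np.linspace(alpha_near, alpha_far, n_slices, dtype=vol.dtype)
    # Broadcast each slice's alpha over its pixels.
    alpha_planes = np.broadcast_to(alphas[:, None, None], vol.shape)
    # Stack gray value and alpha into a (depth, h, w, 2) gray+alpha volume.
    return np.stack([vol, alpha_planes], axis=-1)

rgba = layered_alpha_volume(volume)
```

A renderer compositing these slices front-to-back would then show interior structure (such as the cloud-like cluster) through the semi-transparent outer sections.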
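Claim 39's second alternative (two observation viewpoints simulating the left and right eyes) amounts to producing a disparity pair from a rendered frame. Below is a hedged sketch that fakes the two viewpoints with a depth-proportional horizontal pixel shift; the function name, shift model, and test data are illustrative assumptions, not the patent's method.

```python
import numpy as np

def parallax_pair(rendered, depth, max_shift=4):
    """Form a left/right parallax pair by shifting each pixel of a rendered
    frame horizontally in proportion to its normalized depth."""
    h, w = rendered.shape
    shift = (depth / depth.max() * max_shift).astype(int)
    left = np.zeros_like(rendered)
    right = np.zeros_like(rendered)
    cols = np.arange(w)
    for r in range(h):
        # Shift row pixels opposite ways for the two eyes; clip at the borders.
        left[r, np.clip(cols - shift[r], 0, w - 1)] = rendered[r]
        right[r, np.clip(cols + shift[r], 0, w - 1)] = rendered[r]
    return left, right

frame = np.arange(64, dtype=np.float64).reshape(8, 8)   # assumed rendered frame
depth_map = np.tile(np.linspace(1.0, 2.0, 8), (8, 1))   # assumed depth per pixel
left_view, right_view = parallax_pair(frame, depth_map, max_shift=2)
```

In a real system the two views would come from re-rendering the volume from two camera poses rather than from a 2D shift, but the output contract is the same: two image streams, one per eye.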
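Claim 40's first alternative (segmenting cluster region blocks by gray-scale feature and rendering each in a different color) can be illustrated with a simple band-threshold colorizer. The band edges and colors below are arbitrary assumptions for the sketch, not values from the patent.

```python
import numpy as np

def colorize_clusters(vol,
                      bands=((0.2, 0.5), (0.5, 0.8), (0.8, 1.01)),
                      colors=((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))):
    """Split the fluid region into cluster blocks by gray-level band and tint
    each block with its own RGB color, keeping brightness from the gray value."""
    rgb = np.repeat(vol[..., None], 3, axis=-1).astype(np.float64)
    for (lo, hi), color in zip(bands, colors):
        mask = (vol >= lo) & (vol < hi)
        rgb[mask] = np.asarray(color) * vol[mask][..., None]
    return rgb

vol = np.array([[[0.1, 0.3], [0.6, 0.9]]])  # tiny assumed test volume
rgb = colorize_clusters(vol)
```

Voxels below all bands are left gray, so tissue background stays monochrome while the segmented cluster blocks stand out in color.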
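Claim 41 obtains fluid velocity vector information from volume plane-wave echoes. One standard way to estimate the axial velocity component from a slow-time echo ensemble is the lag-1 autocorrelation (Kasai) estimator; the sketch below shows that estimator under assumed parameter values, and is offered as a plausible building block rather than the patent's actual velocity-vector algorithm (which the claims do not specify at this level).

```python
import numpy as np

def axial_velocity(iq, prf, f0, c=1540.0):
    """Lag-1 autocorrelation (Kasai) estimate of axial flow velocity from a
    slow-time ensemble of complex baseband echoes at one target point.
    iq: complex array whose last axis is the ensemble (pulse) axis."""
    r1 = np.sum(iq[..., 1:] * np.conj(iq[..., :-1]), axis=-1)
    # v = c * fd / (2 * f0), with fd = prf * angle(r1) / (2 * pi)
    return c * prf * np.angle(r1) / (4.0 * np.pi * f0)

# Synthetic ensemble with a known Doppler shift (all values assumed).
fd, prf, f0 = 500.0, 5000.0, 5.0e6
n = np.arange(32)
ensemble = np.exp(1j * 2 * np.pi * fd * n / prf)
v = axial_velocity(ensemble, prf, f0)
```

Vector (multi-component) velocity estimation from plane waves typically combines several such estimates over multiple transmit/receive angles.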
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011478109.8A CN112704516B (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/086068 WO2017020256A1 (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011478109.8A Division CN112704516B (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107847214A true CN107847214A (en) | 2018-03-27 |
CN107847214B CN107847214B (en) | 2021-01-01 |
Family
ID=57943797
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011478109.8A Active CN112704516B (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
CN201580081287.8A Active CN107847214B (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011478109.8A Active CN112704516B (en) | 2015-08-04 | 2015-08-04 | Three-dimensional ultrasonic fluid imaging method and system |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN112704516B (en) |
WO (1) | WO2017020256A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111281430A (en) * | 2018-12-06 | 2020-06-16 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging method, device and readable storage medium |
CN111311523A (en) * | 2020-03-26 | 2020-06-19 | 北京迈格威科技有限公司 | Image processing method, device and system and electronic equipment |
CN111358493A (en) * | 2020-03-09 | 2020-07-03 | 深圳开立生物医疗科技股份有限公司 | Data processing method, device, equipment and medium applied to ultrasonic imaging |
CN111544038A (en) * | 2020-05-12 | 2020-08-18 | 上海深至信息科技有限公司 | Cloud platform ultrasonic imaging system |
CN112294359A (en) * | 2019-07-29 | 2021-02-02 | 超声成像公司 | Ultrasound system for detecting fluid flow in an environment |
CN112767309A (en) * | 2020-12-30 | 2021-05-07 | 无锡祥生医疗科技股份有限公司 | Ultrasonic scanning method, ultrasonic equipment and system |
CN113222868A (en) * | 2021-04-25 | 2021-08-06 | 北京邮电大学 | Image synthesis method and device |
CN113362360A (en) * | 2021-05-28 | 2021-09-07 | 上海大学 | Ultrasonic carotid plaque segmentation method based on fluid velocity field |
CN114209354A (en) * | 2021-12-20 | 2022-03-22 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image display method, device and equipment and readable storage medium |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109490896B (en) * | 2018-11-15 | 2023-05-05 | 大连海事大学 | Extreme environment three-dimensional image acquisition and processing system |
US11453018B2 (en) | 2019-06-17 | 2022-09-27 | Ford Global Technologies, Llc | Sensor assembly with movable nozzle |
CN112712487B (en) * | 2020-12-23 | 2024-10-01 | 北京软通智慧科技有限公司 | Scene video fusion method, system, electronic equipment and storage medium |
CN117770870B (en) * | 2024-02-26 | 2024-05-10 | 之江实验室 | Ultrasonic imaging method and device based on double-linear-array ultrasonic field separation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1257694A (en) * | 1998-11-23 | 2000-06-28 | 通用电气公司 | Three-dimensional ultrasound imaging of velocity and power data using mean or median pixel projections |
CN1442118A (en) * | 2002-03-05 | 2003-09-17 | 株式会社东芝 | Image treatment equipment and ultrasonic diagnosis equipment |
DE60308495T2 (en) * | 2002-02-20 | 2007-06-06 | Koninklijke Philips Electronics N.V. | PORTABLE 3D ULTRASONIC SYSTEM |
US20080027323A1 (en) * | 2004-02-26 | 2008-01-31 | Siemens Medical Solutions Usa, Inc. | Steered continuous wave doppler methods and systems for two-dimensional ultrasound transducer arrays |
CN101584589A (en) * | 2008-05-20 | 2009-11-25 | 株式会社东芝 | Image processing apparatus and computer program product |
CN104011559A (en) * | 2011-10-19 | 2014-08-27 | 维拉声学公司 | Estimation and display for vector doppler imaging using plane wave transmissions |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1985002105A1 (en) * | 1983-11-10 | 1985-05-23 | Acoustec Partners | Ultrasound diagnostic apparatus |
US5779641A (en) * | 1997-05-07 | 1998-07-14 | General Electric Company | Method and apparatus for three-dimensional ultrasound imaging by projecting filtered pixel data |
JP4137516B2 (en) * | 2002-05-20 | 2008-08-20 | 株式会社東芝 | Ultrasonic diagnostic equipment |
EP1974672B9 (en) * | 2007-03-28 | 2014-04-16 | Kabushiki Kaisha Toshiba | Ultrasonic imaging apparatus and ultrasonic velocity optimization method |
JP5226978B2 (en) * | 2007-07-17 | 2013-07-03 | 日立アロカメディカル株式会社 | Ultrasonic diagnostic apparatus and image processing program |
JP5495607B2 (en) * | 2008-05-27 | 2014-05-21 | キヤノン株式会社 | Ultrasonic diagnostic equipment |
US9204858B2 (en) * | 2010-02-05 | 2015-12-08 | Ultrasonix Medical Corporation | Ultrasound pulse-wave doppler measurement of blood flow velocity and/or turbulence |
WO2012161088A1 (en) * | 2011-05-26 | 2012-11-29 | 株式会社東芝 | Ultrasound diagnostic apparatus |
KR101348772B1 (en) * | 2011-12-29 | 2014-01-07 | 삼성메디슨 주식회사 | Ultrasound system and method for providing doppler spectrum images corresponding to at least two sample volumes |
CN102613990B (en) * | 2012-02-03 | 2014-07-16 | 声泰特(成都)科技有限公司 | Display method of blood flow rate of three-dimensional ultrasonic spectrum Doppler and space distribution of blood flow rate |
CN103876780B (en) * | 2014-03-03 | 2015-07-15 | 天津迈达医学科技股份有限公司 | High-frequency ultrasonic blood flow gray-scale imaging method and high-frequency ultrasonic blood flow gray-scale imaging device |
-
2015
- 2015-08-04 CN CN202011478109.8A patent/CN112704516B/en active Active
- 2015-08-04 WO PCT/CN2015/086068 patent/WO2017020256A1/en active Application Filing
- 2015-08-04 CN CN201580081287.8A patent/CN107847214B/en active Active
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111281430A (en) * | 2018-12-06 | 2020-06-16 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging method, device and readable storage medium |
CN111281430B (en) * | 2018-12-06 | 2024-02-23 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging method, device and readable storage medium |
CN112294359A (en) * | 2019-07-29 | 2021-02-02 | 超声成像公司 | Ultrasound system for detecting fluid flow in an environment |
CN112294359B (en) * | 2019-07-29 | 2024-08-09 | 声科影像有限公司 | Ultrasound system for detecting fluid flow in an environment |
CN111358493B (en) * | 2020-03-09 | 2023-04-07 | 深圳开立生物医疗科技股份有限公司 | Data processing method, device, equipment and medium applied to ultrasonic imaging |
CN111358493A (en) * | 2020-03-09 | 2020-07-03 | 深圳开立生物医疗科技股份有限公司 | Data processing method, device, equipment and medium applied to ultrasonic imaging |
CN111311523A (en) * | 2020-03-26 | 2020-06-19 | 北京迈格威科技有限公司 | Image processing method, device and system and electronic equipment |
CN111311523B (en) * | 2020-03-26 | 2023-09-05 | 北京迈格威科技有限公司 | Image processing method, device and system and electronic equipment |
CN111544038A (en) * | 2020-05-12 | 2020-08-18 | 上海深至信息科技有限公司 | Cloud platform ultrasonic imaging system |
CN111544038B (en) * | 2020-05-12 | 2024-02-02 | 上海深至信息科技有限公司 | Cloud platform ultrasonic imaging system |
CN112767309A (en) * | 2020-12-30 | 2021-05-07 | 无锡祥生医疗科技股份有限公司 | Ultrasonic scanning method, ultrasonic equipment and system |
CN113222868A (en) * | 2021-04-25 | 2021-08-06 | 北京邮电大学 | Image synthesis method and device |
CN113362360A (en) * | 2021-05-28 | 2021-09-07 | 上海大学 | Ultrasonic carotid plaque segmentation method based on fluid velocity field |
CN114209354A (en) * | 2021-12-20 | 2022-03-22 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image display method, device and equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112704516B (en) | 2023-05-26 |
CN107847214B (en) | 2021-01-01 |
CN112704516A (en) | 2021-04-27 |
WO2017020256A1 (en) | 2017-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110811687B (en) | Ultrasonic fluid imaging method and ultrasonic fluid imaging system | |
CN107847214B (en) | Three-dimensional ultrasonic fluid imaging method and system | |
CN106102587B (en) | Display methods and ultrasonic image-forming system is imaged in supersonic blood | |
Livingston et al. | Resolving multiple occluded layers in augmented reality | |
JP4065327B2 (en) | Projected image display method and apparatus | |
US8428326B2 (en) | Systems and methods for ultrasound simulation using depth peeling | |
EP2124197B1 (en) | Image processing apparatus and computer program product | |
JP2013119035A (en) | Ultrasonic image formation system and method | |
US20060126927A1 (en) | Horizontal perspective representation | |
JP2012252697A (en) | Method and system for indicating depth of 3d cursor in volume-rendered image | |
Hertel et al. | Augmented reality for maritime navigation assistance-egocentric depth perception in large distance outdoor environments | |
JP4177217B2 (en) | Ultrasonic diagnostic equipment | |
CN103220980A (en) | Ultrasound diagnostic apparatus and ultrasound image display method | |
US9224240B2 (en) | Depth-based information layering in medical diagnostic ultrasound | |
CN115136200A (en) | Rendering three-dimensional overlays on two-dimensional images | |
JP6169911B2 (en) | Ultrasonic image pickup apparatus and ultrasonic image display method | |
EP2962290B1 (en) | Relaying 3d information by depth simulation using 2d pixel displacement | |
JP4113485B2 (en) | Ultrasonic image processing device | |
Baxter et al. | Application of a three-dimensional display in diagnostic imaging | |
CN109754869A (en) | The rendering method and system of the corresponding coloring descriptor of the ultrasound image of coloring | |
Oosterloo | Visualisation of radio data | |
US20230255692A1 (en) | Technique for optical guidance during a surgical procedure | |
Chapman et al. | Constructing Real-Time Immersive Marine Environments for the Visualization of Underwater Archaeological Sites | |
Ostnes | Use of Depth Perception for the Improved Understanding of Hydrographic Data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||