WO2017020256A1 - Three-dimensional ultrasonic fluid imaging method and system - Google Patents
- Publication number
- WO2017020256A1 (PCT/CN2015/086068)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- ultrasonic
- dimensional
- velocity vector
- fluid
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
Definitions
- The invention relates to fluid information imaging and display technology in an ultrasound system, and in particular to a three-dimensional ultrasound fluid imaging method and an ultrasound imaging system.
- A color Doppler flowmeter, like pulsed-wave and continuous-wave Doppler, is based on the Doppler effect between red blood cells and ultrasonic waves.
- A color Doppler flowmeter typically comprises a two-dimensional ultrasound imaging system, a pulsed Doppler (one-dimensional Doppler) blood flow analysis system, a continuous-wave Doppler blood flow measurement system, and a color Doppler (two-dimensional Doppler) blood flow imaging system.
- The oscillator generates two orthogonal signals with a phase difference of π/2, which are multiplied with the Doppler blood flow signal; each product is converted into a digital signal by an analog/digital (A/D) converter and filtered by a comb filter to remove the low-frequency components generated by the vessel wall or valves, and is then sent to the autocorrelator for autocorrelation detection. Since each sample volume contains Doppler blood flow information generated by many red blood cells, autocorrelation detection yields a mixed signal of multiple blood flow velocities.
- The autocorrelation result is sent to the velocity calculator and the variance calculator to obtain a mean velocity, which is stored in the digital scan converter (DSC) together with the FFT-processed blood flow spectrum information and the two-dimensional image information.
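The autocorrelation-based mean-velocity estimate described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the classic Kasai (lag-1 autocorrelation) estimator, not the patent's implementation; the function name and the parameter values are assumptions:

```python
import numpy as np

def kasai_mean_velocity(iq, prf, f0, c=1540.0):
    """Mean axial velocity at one range gate from a slow-time IQ
    ensemble, via the lag-1 autocorrelation (Kasai method)."""
    # lag-1 autocorrelation across successive pulses
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    # mean Doppler shift from the phase of R(1)
    fd = np.angle(r1) * prf / (2.0 * np.pi)
    # Doppler equation: v = c * fd / (2 * f0)
    return c * fd / (2.0 * f0)

# synthetic ensemble: flow at 0.1 m/s, 5 MHz carrier, 4 kHz PRF
c, f0, prf, v_true = 1540.0, 5e6, 4e3, 0.1
fd_true = 2.0 * f0 * v_true / c
n = np.arange(64)
ensemble = np.exp(2j * np.pi * fd_true * n / prf)
v_est = kasai_mean_velocity(ensemble, prf, f0)
```

In practice the comb (wall) filter is applied to the ensemble before this step to suppress vessel-wall and valve clutter.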
- DSC: digital scan converter
- the output displays the two-way parallax image data.
- a three-dimensional ultrasonic fluid imaging method comprising:
- The output displays the two-way parallax image data such that the cluster body exhibits a rolling visual effect that changes with time.
- a three-dimensional ultrasound fluid imaging system comprising:
- a receiving circuit and a beam combining module configured to receive echoes of the bulk ultrasonic beam to obtain a bulk ultrasonic echo signal;
- a data processing module configured to acquire, according to the bulk ultrasonic echo signal, three-dimensional ultrasound image data of at least a portion of the scan target, and to obtain fluid velocity vector information of a target point in the scan target based on the bulk ultrasonic echo signal;
- a 3D image processing module configured to mark the fluid velocity vector information of the target point in the three-dimensional ultrasound image data to form fluid velocity vector identifiers, obtaining volume image data including the fluid velocity vector identifiers;
- a parallax image generating module configured to convert the volume image data into two-way parallax image data; and
- a display device configured to receive and display the two-way parallax image data.
- a three-dimensional ultrasound fluid imaging system comprising:
- a receiving circuit and a beam combining module configured to receive echoes of the bulk ultrasonic beam to obtain a bulk ultrasonic echo signal;
- a data processing module configured to obtain, according to the bulk ultrasonic echo signal, enhanced three-dimensional ultrasound image data of at least a portion of the scan target by a gray-scale blood flow imaging technique;
- a 3D image processing module configured to segment, from the enhanced three-dimensional ultrasound image data, a region of interest characterizing the fluid region, obtaining a cloud-like cluster block, and to mark the cloud-like cluster block in the three-dimensional ultrasound image data, obtaining volume image data including the cloud-like cluster body;
- a parallax image generating module configured to convert the volume image data into two-way parallax image data; and
- a display device configured to output and display the two-way parallax image data such that the cluster body exhibits a rolling visual effect that changes with time.
- The invention provides an ultrasonic fluid imaging method and system based on 3D display technology, which realizes the observation effect of a 3D ultrasound image on a display screen as perceived by the human eye, fully displays the fluid motion during display, and provides the observer with more viewing perspectives.
- FIG. 1 is a block diagram showing an ultrasonic imaging system according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a vertically emitted planar ultrasonic beam according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of a deflected-emitting planar ultrasonic beam according to an embodiment of the present invention
- FIG. 4 is a schematic diagram of a focused ultrasonic beam according to an embodiment of the present invention.
- Figure 5 is a schematic view showing a diverging ultrasonic beam in an embodiment of the present invention.
- FIG. 6(a) is a schematic diagram of a two-dimensional array probe array element
- FIG. 6(b) is a schematic diagram of a three-dimensional image scanning using a two-dimensional array probe along a certain ultrasonic propagation direction according to the present invention
- FIG. 6(c) is a schematic diagram of the measurement of the relative offset of the scanning body in FIG. 6(b);
- FIG. 7(a) is a schematic diagram of a two-dimensional array probe array element partition according to an embodiment of the present invention
- FIG. 7(b) is a schematic diagram of a body focused ultrasonic wave emission according to an embodiment of the present invention
- FIG. 8(a) is a flow chart showing a method for displaying a velocity vector identification according to an embodiment of the present invention
- FIG. 8(b) is a flow chart showing a method for displaying a cluster body according to an embodiment of the present invention
- FIG. 9 is a schematic flow chart of a method according to an embodiment of the present invention.
- FIG. 10 is a schematic flow chart of a method according to an embodiment of the present invention.
- Figure 11 (a) is a schematic diagram of calculation of fluid velocity vector information in a first mode in one embodiment of the present invention
- Figure 11 (b) is a schematic diagram of fluid velocity vector information calculation in a second mode in one embodiment of the present invention
- Figure 12 (a) is a schematic view showing two ultrasonic propagation directions in one embodiment of the present invention.
- Figure 12 (b) is a schematic diagram of the synthesis of fluid velocity vector information based on Figure 12 (a);
- Figure 12 (c) is a schematic diagram of calculating the fluid velocity vector at a target point in one embodiment of the present invention.
- Figure 12 (d) is a schematic diagram of an 8-point interpolation method in one embodiment of the present invention.
- FIG. 13(a) is a first schematic diagram showing the effect of volume image data in one embodiment of the present invention.
- FIG. 13(b) is a schematic diagram showing a second effect of volume image data in one embodiment of the present invention.
- FIG. 14 is a schematic diagram of a third effect of volume image data in one embodiment of the present invention.
- FIG. 15 is a schematic structural diagram of a spatial stereoscopic display device according to an embodiment of the present invention.
- FIG. 16 is a schematic structural view of a spatial stereoscopic display device according to an embodiment of the present invention.
- FIG. 17 is a schematic structural view of a spatial stereoscopic display device according to an embodiment of the present invention.
- FIG. 18 is a schematic diagram showing the effect of the volume image data based on the first mode in one embodiment of the present invention.
- FIG. 19 is a schematic diagram showing the effect of the volume image data based on the second mode in one embodiment of the present invention.
- FIG. 20 is a schematic diagram of a third effect of volume image data in one embodiment of the present invention.
- FIG. 21(a) is a schematic view showing an imaging effect of a cloud-like cluster body in one embodiment of the present invention
- FIG. 21(b) is a schematic diagram showing the effect of superimposing blood flow velocity vector markers on the cloud-like cluster body in one embodiment of the present invention
- FIG. 21(c) is a schematic diagram showing the effect of superimposing color information of cloud-like clusters in one embodiment of the present invention
- FIG. 22 is a schematic diagram showing an effect of selecting a target point to form a trajectory according to an embodiment of the present invention
- FIG. 23 is a schematic diagram of converting volume image data into two-way parallax images according to an embodiment of the present invention.
- FIG. 24 is a schematic diagram of converting volume image data into two-way parallax images according to another embodiment of the present invention.
- FIG. 25 is a schematic structural diagram of a human-machine interaction manner according to an embodiment of the present invention.
- FIG. 26 is a schematic diagram of performing parallax image conversion using a virtual camera according to an embodiment of the present invention.
- FIG. 27(a) is a schematic diagram of the effect of the cloud-like cluster body rolling over time, as observed by the naked eye in the virtual 3D ultrasound image when the two-way parallax images are output and displayed, in one embodiment of the present invention
- FIG. 27(b) is a schematic diagram of the effect of the blood flow velocity vector markers flowing over time, as observed by the naked eye in the virtual 3D ultrasound image when the two-way parallax images are output and displayed, in one embodiment of the present invention.
- The ultrasonic imaging system generally includes: a probe 1, a transmitting circuit 2, a transmitting/receiving selection switch 3, a receiving circuit 4, a beam combining module 5, a signal processing module 6, an image processing module 7, and a display device 8.
- the transmitting circuit 2 transmits a delayed-focused transmission pulse having a certain amplitude and polarity to the probe 1 through the transmission/reception selection switch 3.
- The probe 1 is excited by the transmitting pulse to transmit ultrasonic waves to a scanning target (for example, an organ, tissue or blood vessel in a human or animal body, not shown), and, after a certain delay, receives the ultrasound echo carrying information of the scanning target reflected from the target area and reconverts the ultrasound echo into an electrical signal.
- the receiving circuit receives the electrical signal generated by the conversion of the probe 1 to obtain a bulk ultrasonic echo signal, and sends the bulk ultrasonic echo signals to the beam combining module 5.
- the beam synthesizing module 5 performs focus delay, weighting, channel summation and the like on the bulk ultrasonic echo signal, and then sends the bulk ultrasonic echo signal to the signal processing module 6 for related signal processing.
- the bulk ultrasonic echo signal processed by the signal processing module 6 is sent to the image processing module 7.
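The focus delay, weighting and channel summation performed by the beam combining module 5 amount to delay-and-sum beamforming. Below is a minimal sketch with hypothetical names and values; a real beamformer would use sub-sample interpolation, dynamic receive focusing and a non-uniform apodization window:

```python
import numpy as np

def delay_and_sum(channel_data, element_x, focus, fs, c=1540.0):
    """Align per-channel RF data for one receive focal point, apply
    uniform apodization weights, and sum across channels."""
    xf, zf = focus
    weights = np.ones(len(element_x))              # apodization (uniform here)
    delays = np.sqrt((np.asarray(element_x) - xf) ** 2 + zf ** 2) / c
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    aligned = np.array([np.roll(ch, -s) for ch, s in zip(channel_data, shifts)])
    return np.sum(weights[:, None] * aligned, axis=0)

# synthetic echo: each channel holds an impulse at its expected arrival
# sample, so after alignment the impulses sum coherently at sample 0
xs = np.linspace(-0.01, 0.01, 8)
fs = 40e6
d = np.sqrt(xs ** 2 + 0.03 ** 2) / 1540.0
k = np.round((d - d.min()) * fs).astype(int)
rf = np.zeros((8, 128))
rf[np.arange(8), k] = 1.0
summed = delay_and_sum(rf, xs, (0.0, 0.03), fs)
```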
- The image processing module 7 processes the signals differently according to the imaging mode required by the user to obtain image data of different modes, for example two-dimensional image data and three-dimensional ultrasound image data. Through logarithmic compression, dynamic range adjustment, digital scan conversion and other processing, it then forms ultrasound image data of different patterns, such as B-mode, C-mode and D-mode two-dimensional image data, as well as three-dimensional ultrasound image data that can be sent to the display device for display as a 3D image or a 3D stereoscopic image.
- The image processing module 7 sends the generated three-dimensional ultrasound image data to the 3D image processing module 11, where processing such as marking and segmentation yields the volume image data; the volume image data is a single-frame image or a multi-frame image with voxel information.
- The parallax image generating module 12 converts the volume image data into two-way parallax image data, which is displayed on the display device 8.
- Based on 3D display technology, the display device 8 exploits the parallax between the left and right eyes of the viewer.
- The human eye reconstructs the images displayed on the display device 8 to perceive a 3D stereoscopic image of the virtual scan target (hereinafter referred to as a 3D ultrasound image).
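The conversion into two-way parallax image data can be pictured as rendering the volume from two horizontally offset virtual viewpoints, one per eye. The toy pinhole-projection sketch below is illustrative only; the baseline and focal length are assumed values, not from the patent:

```python
import numpy as np

def parallax_pair(points, baseline=0.06, f=1.0):
    """Project 3-D points into a left/right image pair from two
    pinhole cameras offset by +/- baseline/2 along x."""
    def project(p, cam_x):
        x, y, z = p[:, 0], p[:, 1], p[:, 2]
        return np.stack([f * (x - cam_x) / z, f * y / z], axis=1)
    return project(points, -baseline / 2.0), project(points, +baseline / 2.0)

pts = np.array([[0.0, 0.0, 0.5],    # a near point
                [0.0, 0.0, 1.0]])   # a far point
left, right = parallax_pair(pts)
disparity = left[:, 0] - right[:, 0]   # f * baseline / z: larger when nearer
```

The depth-dependent disparity between the two images is what the viewer's visual system fuses into the stereoscopic impression.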
- The display device 8 falls into two types: glasses-type display devices and naked-eye display devices.
- the glasses type display device is realized by using a flat display screen together with 3D glasses.
- The naked-eye display device, that is, the naked-eye 3D display, consists of a 3D stereoscopic terminal, playback software and production software; it is a modern high technology that integrates optics, photography, computer technology, automatic control, software and 3D animation into a stereoscopic display system.
- the signal processing module 6 and the image processing module 7 can be implemented by using one processor or multiple processors.
- The 3D image processing module 11 can also be integrated with the signal processing module 6 and the image processing module 7 and implemented with one or more processors, or a separate processor can be provided to implement the 3D image processing module 11.
- the parallax image generating module 12 described above may be implemented by a software-only program, or may be implemented by using hardware in combination with a software program, which will be specifically described below.
- Probe 1 typically includes an array of multiple array elements. Each time ultrasound is transmitted, all of the array elements of the probe 1, or a portion of them, participate in the transmission. Each participating array element is excited by the transmitting pulse and emits an ultrasonic wave; the ultrasonic waves emitted by the individual elements superimpose during propagation to form a synthesized ultrasonic beam transmitted to the scanning target, and the direction of this synthesized beam is the ultrasonic propagation direction referred to herein.
- the array elements participating in the ultrasonic transmission may be excited by the transmitting pulse at the same time; or, there may be a certain delay between the time when the array elements participating in the ultrasonic transmission are excited by the transmitting pulse.
- the propagation direction of the above-described synthetic ultrasonic beam can be changed by controlling the delay between the time at which the element participating in the transmission of the ultrasonic wave is excited by the emission pulse, which will be specifically described below.
- Alternatively, the ultrasonic waves emitted by the array elements participating in the transmission neither focus nor completely diverge during propagation, but form a wave that is generally planar as a whole. Herein, this unfocused plane wave is called a "plane ultrasonic beam."
- The ultrasonic beams emitted by the individual array elements can be superimposed at a predetermined position so that the intensity of the ultrasonic wave is maximal there; that is, the ultrasonic waves emitted by the elements are "focused" at that position, referred to as the "focus." The resulting synthesized ultrasonic beam is a beam focused at the focus, referred to herein as a "focused ultrasonic beam."
- Figure 4 is a schematic diagram of a focused ultrasound beam.
- In FIG. 4, the ultrasonic waves emitted by the array elements participating in the transmission are focused at the focus to form a focused ultrasound beam.
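The transmit delays that produce such focusing can be sketched as follows; the element positions and focus are illustrative values. Each element fires early enough that all wavefronts reach the focus at the same instant:

```python
import numpy as np

def focus_delays(element_x, focus_x, focus_z, c=1540.0):
    """Per-element transmit delay so all waves arrive at the focus
    simultaneously; the element farthest from the focus fires first."""
    dist = np.sqrt((np.asarray(element_x) - focus_x) ** 2 + focus_z ** 2)
    return (dist.max() - dist) / c

xs = np.linspace(-0.01, 0.01, 9)            # 9-element aperture, metres
t = focus_delays(xs, 0.0, 0.03)             # focus 30 mm ahead of the centre
arrival = t + np.sqrt(xs ** 2 + 0.03 ** 2) / 1540.0   # equal for every element
```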
- Alternatively, the ultrasonic waves emitted by the array elements participating in the transmission diverge during propagation, forming a wave that is generally divergent as a whole.
- An ultrasonic wave of this divergent form is referred to as a "divergent ultrasonic beam."
- When a plurality of linearly arranged array elements are simultaneously excited by the electric pulse signal, each element emits an ultrasonic wave at the same time, and the propagation direction of the synthesized ultrasonic beam coincides with the normal direction of the element arrangement plane.
- This is the vertically emitted plane wave: there is no time delay between the array elements participating in the transmission (that is, no delay between the times at which the elements are excited by the transmitting pulse), and every element is excited by the transmitting pulse simultaneously.
- The generated ultrasonic beam is a plane wave, that is, a plane ultrasonic beam, whose propagation direction is substantially perpendicular to the surface of the probe 1 from which the ultrasound is emitted; in other words, the angle between the propagation direction of the synthesized beam and the normal direction of the element arrangement plane is zero degrees.
- When there is a time delay, each array element emits its ultrasonic beam in sequence according to the delay, and the propagation direction of the synthesized ultrasonic beam forms a certain angle with the normal direction of the element arrangement plane, namely the deflection angle of the synthesized beam. By changing the time delay, both the magnitude of the deflection angle and the side of the normal toward which the beam deflects can be adjusted.
- Figure 3 shows the plane wave of the deflected emission.
- The generated ultrasonic beam is a plane wave, that is, a plane ultrasonic beam, whose propagation direction forms an angle with the normal direction of the element arrangement plane of the probe 1 (for example, angle a in FIG. 3); this angle is the deflection angle of the plane ultrasonic beam.
- By adjusting the delay between the times at which the array elements participating in the transmission are excited by the transmitting pulse, the "deflection angle" formed between the synthesized beam and the normal direction of the element arrangement plane can be adjusted; the synthesized beam may be the plane ultrasonic beam, the focused ultrasonic beam or the divergent ultrasonic beam mentioned above.
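For a linear aperture, the deflection described above corresponds to a linear delay ramp across the elements. A small sketch under the usual far-field approximation; the names and values are illustrative:

```python
import numpy as np

def steering_delays(element_x, angle_deg, c=1540.0):
    """Linear delay profile that tilts the emitted plane wave by
    angle_deg from the array normal; zero delay -> vertical emission."""
    t = np.asarray(element_x) * np.sin(np.radians(angle_deg)) / c
    return t - t.min()   # shift so the earliest element fires at t = 0

xs = np.linspace(-0.01, 0.01, 64)
vertical = steering_delays(xs, 0.0)    # all zeros: beam along the normal
tilted = steering_delays(xs, 15.0)     # linear ramp: beam deflected 15 deg
```

Steering toward the opposite side of the normal simply reverses the sign of the angle, flipping the ramp direction.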
- The area array probe can be regarded as a plurality of array elements 112 arranged along two directions.
- Each array element is provided with a corresponding delay control line for adjusting its delay. By changing the delay of each element during the transmission and reception of the ultrasonic beam, beam steering and dynamic focusing can be performed, thereby changing the propagation direction of the synthesized ultrasonic beam and realizing scanning of the ultrasonic beam in three-dimensional space to form a stereoscopic three-dimensional ultrasound image database.
- As shown in FIG. 6(b), the area array probe 1 includes a plurality of array elements 112. The emitted bulk ultrasonic beam propagates along the direction indicated by the dotted arrow F51 and forms in three-dimensional space a scanning body A1 for acquiring three-dimensional ultrasound image data (the three-dimensional structure drawn with broken lines in FIG. 6(b)); the scanning body A1 has a predetermined offset relative to the reference body A2 (the three-dimensional structure drawn with solid lines in FIG. 6(b)).
- The reference body A2 is the scanning body formed in three-dimensional space by an ultrasonic beam emitted by the elements participating in the transmission and propagating along the normal of the element arrangement plane (the solid-line arrow F52 in FIG. 6(b)).
- It can be seen that the scanning body A1 has an offset relative to the reference body A2. The deflection of a scanning body formed along a different ultrasonic propagation direction, relative to the reference body A2, can be characterized by the following two angles. First, the propagation direction of the ultrasonic beam has a predetermined deflection angle θ relative to the scanning plane A21 formed by the ultrasonic beam in the scanning body (the quadrilateral drawn with dotted lines in FIG. 6(b)); the deflection angle θ is selected within the range [0°, 90°). Second, as shown in FIG. 6(c), in the plane rectangular coordinate system on the element arrangement plane P1, the rotation angle φ, formed by rotating counterclockwise from the X axis to the projection P51 of the ultrasonic propagation direction onto the element arrangement plane P1 (the dotted-line arrow in plane P1 in FIG. 6(c)), is selected within the range [0°, 360°).
- When the deflection angle θ is zero, the scanning body A1 has zero offset relative to the reference body A2.
- The magnitudes of the deflection angle θ and the rotation angle φ can be changed to adjust the offset of the scanning body A1 relative to the reference body A2, thereby forming different scanning bodies along different ultrasonic propagation directions in three-dimensional space.
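Together, the deflection angle θ and the rotation angle φ fix the propagation direction of the scanning body as a unit vector. A small sketch, taking z as the array normal (the function name and axis convention are illustrative):

```python
import numpy as np

def propagation_direction(theta_deg, phi_deg):
    """Unit propagation vector from the deflection angle theta
    (measured from the array normal, [0, 90)) and the rotation angle
    phi of its projection onto the element plane ([0, 360))."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)])          # z axis = array normal

d0 = propagation_direction(0.0, 0.0)    # theta = 0: zero offset, along normal
d1 = propagation_direction(30.0, 90.0)  # deflected 30 deg toward the y axis
```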
- The transmission of the above-mentioned scanning body can also be realized by a probe combination structure in which linear array probes are arranged in an array; the transmission method is the same.
- The bulk ultrasonic echo signal returned from the scanning body A1 is used to obtain the three-dimensional ultrasound image data B1, and the bulk ultrasonic echo signal returned from the scanning body A2 is used to obtain the three-dimensional ultrasound image data B2.
- An ultrasonic beam that is "transmitted to the scanning target so as to propagate in the space in which the scanning target is located and form the above-described scanning body" is regarded herein as a bulk ultrasonic beam, which may comprise the collection of ultrasonic beams emitted in one or more transmissions. Accordingly, by beam type, a plane ultrasonic beam so transmitted is regarded as a body plane ultrasonic beam, a focused ultrasonic beam so transmitted as a body focused ultrasonic beam, a divergent ultrasonic beam so transmitted as a body divergent ultrasonic beam, and so on. That is, the bulk ultrasonic beam may include a body plane ultrasonic beam, a body focused ultrasonic beam, a body divergent ultrasonic beam, etc., and the type name of the ultrasonic beam may be inserted between "body" and "ultrasonic beam."
- the body plane ultrasonic beam usually covers almost the entire imaging area of the probe 1, so that when the body plane ultrasonic beam is used for imaging, one frame of the three-dimensional ultrasound image can be obtained with one shot, so the imaging frame rate can be high.
- With body focused ultrasound beam imaging, because the beam is focused at the focus, only one or a few scan lines are obtained per transmission, and multiple transmissions are required to obtain all the scan lines in the imaged area; the three-dimensional ultrasound image of the imaged area is then obtained by combining all the scan lines. Therefore, the frame rate of body focused ultrasound beam imaging is relatively low.
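This frame-rate trade-off can be made concrete with a back-of-the-envelope calculation: the achievable volume rate is roughly the pulse repetition frequency divided by the number of transmit events needed per volume. The PRF and line counts below are assumed figures, not from the patent:

```python
def volume_rate(prf_hz, transmits_per_volume):
    """Volumes per second = PRF / transmit events per volume."""
    return prf_hz / transmits_per_volume

plane = volume_rate(5000, 1)           # one body plane wave covers the volume
focused = volume_rate(5000, 64 * 64)   # one focused line per lateral position
```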
- The energy of each body focused ultrasound beam transmission is concentrated, and imaging is performed only where the energy is concentrated, so the obtained echo signal has a high signal-to-noise ratio and can be used to obtain tissue image ultrasound measurement data of better quality.
- Based on ultrasonic three-dimensional imaging technology and 3D display technology, the present invention provides the user with a better viewing angle by superimposing the fluid velocity vector information of the fluid on the 3D ultrasound image, enabling real-time understanding of flow information such as blood flow velocity and flow at the scanning position; it also allows the human eye to observe a more stereoscopic, near-realistic virtual 3D ultrasound image and stereoscopically reproduces the flow path information of the fluid.
- the fluids referred to herein may include: body fluids such as blood flow, intestinal fluid, lymph fluid, tissue fluid, and cell fluid.
- This embodiment provides a three-dimensional ultrasonic fluid imaging method which, based on three-dimensional ultrasound imaging technology, displays the ultrasound image on a display screen through 3D display technology and reproduces a stereoscopic, near-lifelike 3D imaging effect for the human eye. It provides users with a better viewing angle and a richer visual display different from the traditional display mode, so that the actual state of the scanning position can be clearly understood in real time; the display also reveals fluid information more realistically, providing medical staff with more comprehensive and accurate image analysis results and creating a new three-dimensional imaging display method for fluid imaging display technology realized on ultrasound systems.
- FIG. 8(a) is a flow chart showing the display of a velocity vector identifier in a three-dimensional ultrasonic fluid imaging method in one embodiment of the present invention, and FIG. 8(b) is a flow chart showing the display of a cluster body in a three-dimensional ultrasonic fluid imaging method in one embodiment of the present invention. Some of the steps in the two flows are the same, and some steps may also be included in each other; refer to the detailed description below.
- In step S100, the transmitting circuit 2 excites the probe 1 to emit a bulk ultrasonic beam to the scanning target, so that the bulk ultrasonic beam propagates in the space in which the scanning target is located to form the scanning body.
- The probe 1 is an area array probe, or a probe combination structure in which linear array probes are arranged in an array, or the like. The area array probe or probe combination ensures that the feedback data of one scanning body is obtained in time within the same scan, improving scanning speed and imaging speed.
- The bulk ultrasonic beam emitted to the scanning target herein may include at least one of, or a combination of at least two of, the following: a body focused ultrasonic beam, a body unfocused ultrasonic beam, a body virtual source ultrasonic beam, a body non-diffracting ultrasonic beam, a body divergent ultrasonic beam, and a body plane ultrasonic beam ("at least two" herein includes that number itself; the same applies hereinafter).
- embodiments of the present invention are not limited to the above several types of bulk ultrasonic beams.
- the scanning method of the body plane wave can save the scanning time of the three-dimensional ultrasound image and increase the imaging frame rate, thereby realizing the fluid velocity vector imaging of the high frame rate. Therefore, step S101 is included in step S100: the body plane ultrasonic beam is emitted toward the scanning target.
- In step S201, an echo of the body plane ultrasonic beam is received and a body plane ultrasonic echo signal is obtained; the body plane ultrasonic echo signal can be used to reconstruct the three-dimensional ultrasound image data and/or to calculate fluid velocity vector information of a target point within the scan target.
- the fluid velocity vector information mentioned herein includes at least the velocity vector of the target point (ie, the velocity magnitude and the velocity direction), and the fluid velocity vector information may also include corresponding location information of the target point.
- the fluid velocity vector information may also include any other information about the velocity of the target point, such as acceleration information, etc., that may be obtained from the magnitude of the velocity and the direction of velocity.
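As a data structure, the fluid velocity vector information of one target point might be held as follows. This container is purely illustrative; the class and field names are assumptions, not from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class FluidVelocityVector:
    """Velocity magnitude + direction (as components), the target
    point's position, and optional derived quantities."""
    position: tuple                        # (x, y, z) of the target point
    velocity: tuple                        # (vx, vy, vz)
    acceleration: tuple = (0.0, 0.0, 0.0)  # optional derived information

    @property
    def speed(self) -> float:
        return math.sqrt(sum(v * v for v in self.velocity))

v = FluidVelocityVector(position=(0.0, 0.0, 0.02), velocity=(0.3, 0.0, 0.4))
```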
- In step S301, three-dimensional ultrasound image data of at least a portion of the scanning target is acquired according to the body plane ultrasonic echo signal; and in step S401, fluid velocity vector information of the target point within the scanning target is obtained based on the body plane ultrasonic echo signal.
- the scanning target may be a tubular tissue structure having a flowing substance such as an organ, a tissue, a blood vessel, or the like in a human body or an animal body
- The target point in the scanning target may be a point or position of interest within the scanning target. It is usually expressed as a corresponding position in the two-way parallax image data, converted from the volume image data of the scan target and displayed on the display device; based on the image conversion mapping relationship, this position corresponds to a virtual space point or virtual space position that can be marked, or is marked, in the displayed virtual 3D ultrasound image. Here a virtual space position may be a virtual space point or the neighborhood of a virtual space point; the same applies below.
- The target point thus corresponds to a virtual space point or virtual space position in the 3D ultrasound image, and to a corresponding mapped position on the displayed image of the display screen, that is, a pixel or a neighborhood of a pixel in the two-way parallax image data, which in turn corresponds to a voxel or a neighborhood of a voxel in the three-dimensional ultrasound image data.
- Alternatively, a body focused ultrasonic beam may be emitted to the scanning target so that it propagates in the space in which the scanning target is located to form a scanning body; then, in step S200, the echo of the body focused ultrasonic beam is received and a body focused ultrasonic echo signal is obtained, which can be used to reconstruct three-dimensional ultrasound image data and/or to calculate fluid velocity vector information of a target point within the scan target.
- Step S100 includes steps S101 and S102. In step S101, a body plane ultrasonic beam is emitted toward the scanning target; in step S201 the echo of the body plane ultrasonic beam is received to obtain a body plane ultrasonic echo signal, and based on this signal the fluid velocity vector information of the target point within the scan target is obtained in step S401.
- In step S102, a body focused ultrasonic beam is emitted toward the scanning target; in step S202 the echo of the focused ultrasound beam is received to obtain a volume focused ultrasound echo signal, and in step S302 three-dimensional ultrasound image data of at least a portion of the scan target is obtained from this signal.
- The volume focused ultrasound echo signal can be used to reconstruct high-quality three-dimensional ultrasound image data, which serves as a background image for characterizing the tissue structure.
- In step S100, the two kinds of bulk ultrasonic beams are emitted alternately toward the scanning target.
- That is, the emission of the body focused ultrasonic beam toward the scanning target is inserted into the process of emitting the body plane ultrasonic beam, i.e., steps S101 and S102 shown in FIG. 10 are performed alternately. This ensures synchronized acquisition of the image data from the two kinds of bulk ultrasonic beams and improves the accuracy of the fluid velocity vector information of the target point superimposed on the background image.
- The ultrasonic beam may be emitted toward the scanning target according to the Doppler imaging technique, for example by emitting the bulk ultrasonic beam toward the scanning target along a single ultrasonic propagation direction so that the beam propagates in the space in which the scanning target is located, forming a scanning body. The three-dimensional ultrasound image data used to calculate the target point fluid velocity vector information is then acquired from the bulk ultrasonic echo signals fed back from this scanning body.
- Alternatively, the ultrasound beam may be emitted toward the scanning target along a plurality of ultrasonic propagation directions to form a plurality of scanning bodies, each scanning body being derived from a bulk ultrasonic beam emitted along one propagation direction. Image data for calculating the target point fluid velocity vector information is acquired from the bulk ultrasonic echo signals fed back from the plurality of scanning bodies. For example, steps S200 and S400 include:
- synthesizing the velocity vectors obtained for the target point to generate the fluid velocity vector information of the target point.
- A plurality of ultrasonic propagation directions means two or more ultrasonic propagation directions, "or more" being inclusive of the stated number, the same below.
- The process of emitting the ultrasonic beam toward the scanning target may be performed alternately according to the ultrasonic propagation direction. For example, if the ultrasonic beam is emitted toward the scanning target along two ultrasonic propagation directions, the beam is first emitted along the first ultrasonic propagation direction and then along the second, completing one scanning cycle; this cycle is then repeated.
- More generally, the ultrasonic beam may first be emitted along one ultrasonic propagation direction and then along the next, the scanning process being complete after all ultrasonic propagation directions have been executed in turn.
- The different propagation directions can be obtained by changing the delay time of each array element, or of each group of array elements, participating in the ultrasonic transmission; for details, refer to the explanations of FIG. 2 to FIG. 6(a)-6(c).
- The process of emitting a body plane ultrasonic beam toward the scanning target along a plurality of ultrasonic propagation directions may include: transmitting a first bulk ultrasonic beam toward the scanning target, the first bulk ultrasonic beam having a first ultrasonic propagation direction; and transmitting a second bulk ultrasonic beam toward the scanning target, the second bulk ultrasonic beam having a second ultrasonic propagation direction.
- The first and second bulk ultrasonic beams may be plane ultrasonic beams, in which case the corresponding first and second bulk ultrasonic echo signals become first and second body plane ultrasonic echo signals.
- The process of transmitting the body plane ultrasonic beam toward the scanning target along a plurality of ultrasonic propagation directions may further include: emitting the bulk ultrasonic beam toward the scanning target along N ultrasonic propagation directions (N being any natural number greater than or equal to 3), and receiving the echoes of the bulk ultrasonic beams to obtain N sets of bulk ultrasonic echo signals, each set derived from a bulk ultrasonic beam emitted along one ultrasonic propagation direction. These N sets of ultrasonic echo signals can be used to calculate the fluid velocity vector information at the target point.
- The bulk ultrasonic beam may be propagated in the space in which the scanning target is located by exciting some or all of the ultrasonic transmitting array elements along the one or more ultrasonic propagation directions. The bulk ultrasonic beam in this embodiment may be a body plane ultrasonic beam. Some or all of the array regions may be excited by dividing the ultrasonic emission array elements into a plurality of array element regions 111.
- The bulk ultrasonic beam is emitted toward the scanning target along one or more ultrasonic propagation directions and propagates in the space where the scanning target is located to form a scanning body, each scanning body being derived from a bulk ultrasonic beam emitted along one propagation direction. For the formation principle of the scanning body, reference may be made to the detailed descriptions of FIGS. 6(a) to 6(c) above, which are not repeated here. The bulk ultrasonic beam in the present embodiment may include a body focused ultrasonic beam, a body plane ultrasonic beam, or the like, but is not limited to these beam types.
- The ultrasonic emission array elements can be divided into a plurality of array element regions; one region can be excited to generate a focused ultrasonic beam, and by exciting multiple regions simultaneously, multiple focused ultrasound beams can be generated at the same time to form a body focused ultrasonic beam and thereby obtain a scanning body, as shown in FIG. 7(a) and FIG. 7(b).
- Each array element region 111 is used to generate at least one focused ultrasonic beam (the arc with an arrow in the figure). When the plurality of array element regions 111 are excited simultaneously to generate focused ultrasonic beams, these beams propagate in the space where the scanning target is located to form a scanning body 11 composed of the body focused ultrasonic beam. The focused ultrasound beams lying in the same plane form a scanning plane 113 (shown by solid arrows in the figure, each solid arrow indicating one focused ultrasound beam), so the scanning body 11 can also be regarded as composed of a plurality of scanning planes 113.
- The orientation of the focused ultrasonic beams can be changed, thereby changing the propagation direction of the plurality of focused ultrasonic beams in the space in which the scanning target is located.
- A plurality of bulk ultrasonic beams are emitted toward the scanning target along each ultrasonic propagation direction to obtain a plurality of bulk ultrasonic echo signals for subsequent ultrasonic image data processing. For example, a plurality of body plane ultrasonic beams may be emitted toward the scanning target along a plurality of ultrasonic propagation directions, or a plurality of body focused ultrasonic beams may be emitted toward the scanning target along one or more ultrasonic propagation directions. Each emission of a bulk ultrasonic beam corresponds to obtaining one bulk ultrasonic echo signal.
- The process of transmitting the plurality of bulk ultrasonic beams toward the scanning target is performed alternately according to the ultrasonic propagation direction, so that the obtained echo data allow the velocity vector of the target point to be calculated for the same time instant, improving the calculation accuracy of the fluid velocity vector information. For example, if bulk ultrasonic beams are emitted toward the scanning target along three ultrasonic propagation directions, at least one bulk ultrasonic beam may first be emitted along the first ultrasonic propagation direction, then at least one along the second, and then at least one along the third, completing one scanning cycle; this cycle is repeated until all emissions in every ultrasonic propagation direction are complete.
- The number of times the bulk ultrasonic beam is emitted along different ultrasonic propagation directions within the same scanning cycle may be the same or different. For example, for emission along two ultrasonic propagation directions, the sequence may be A1 B1 A2 B2 A3 B3 A4 B4 ... Ai Bi, and so on, where Ai is the i-th emission along the first ultrasonic propagation direction and Bi is the i-th emission along the second.
- Alternatively, for three propagation directions, the bulk ultrasonic beams may follow the sequence A1 B1 B2 C1 A2 B3 B4 C2 A3 B5 B6 C3 ..., and so on, where:
- Ai is the i-th emission along the first ultrasonic propagation direction;
- Bi is the i-th emission along the second ultrasonic propagation direction;
- Ci is the i-th emission along the third ultrasonic propagation direction.
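As an illustrative sketch (not part of the claimed invention), the interleaved emission orders described above can be generated programmatically; the labels "A", "B", "C" and the function name are assumptions for illustration only:

```python
# Sketch: build the alternating emission schedules described above.
# Each entry of counts_per_cycle is (direction_label, shots_per_cycle).

def interleave_emissions(counts_per_cycle, n_cycles):
    """Return an emission schedule that alternates propagation directions.

    [("A", 1), ("B", 1)] yields A1 B1 A2 B2 ...;
    [("A", 1), ("B", 2), ("C", 1)] yields A1 B1 B2 C1 A2 B3 B4 C2 ...
    """
    schedule = []
    shot_index = {label: 0 for label, _ in counts_per_cycle}
    for _ in range(n_cycles):
        for label, shots in counts_per_cycle:
            for _ in range(shots):
                # each direction keeps its own running emission index
                shot_index[label] += 1
                schedule.append(f"{label}{shot_index[label]}")
    return schedule
```

The same scheduler reproduces both example sequences in the text by changing the per-cycle shot counts.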
- the above step S100 includes:
- transmitting a plurality of body focused ultrasound beams toward the scanning target to acquire data for reconstructing three-dimensional ultrasound image data; and
- transmitting a plurality of body plane ultrasonic beams toward the scanning target along one or more ultrasonic propagation directions to acquire image data for calculating the target point velocity vector.
- The emission of the body focused ultrasonic beam toward the scanning target can be inserted into the process of emitting the body plane ultrasonic beams, for example by inserting the body focused emissions uniformly into the sequence of body plane emissions.
- The continuous "Ai Bi Ci" body plane emission sequence described above mainly provides the data for calculating the target point velocity information, while the other type of bulk ultrasound beam, used for acquiring the reconstructed three-dimensional ultrasound image, is emitted by insertion into that sequence. The following explains in detail how the two types of beams are transmitted alternately by inserting a plurality of body focused ultrasonic beams into the continuous "Ai Bi Ci" body plane emission sequence.
- Here, Ai is the i-th emission along the first ultrasonic propagation direction;
- Bi is the i-th emission along the second ultrasonic propagation direction;
- Ci is the i-th emission along the third ultrasonic propagation direction;
- Di is the i-th emission of the body focused ultrasonic beam.
- The emission of the body focused ultrasonic beam may be inserted after a plurality of body plane ultrasonic beams have been emitted along the different ultrasonic propagation directions; alternatively, at least a portion of the body plane emissions toward the scanning target may be performed alternately with at least a portion of the body focused emissions, and so on.
- The volume focused ultrasonic beam can be used to obtain high-quality three-dimensional ultrasound image data, while the high frame rate of the body plane beam yields fluid velocity vector information with high temporal resolution. To acquire both, the two types of ultrasonic beams are emitted alternately during data acquisition.
- The order and rules for transmitting the plurality of bulk ultrasonic beams toward the scanning target along different ultrasonic propagation directions can be chosen arbitrarily; the possibilities are not enumerated here and are not limited to the specific embodiments provided above.
- In step S200, the receiving circuit 4 and the beam combining module 5 receive the echoes of the bulk ultrasonic beams emitted in step S100 to obtain bulk ultrasonic echo signals. Whichever type of bulk ultrasonic beam is used in step S100, the echo of that type is received in step S200 to generate a bulk ultrasonic echo signal of the corresponding type.
- If a body focused ultrasonic beam is emitted, a volume focused ultrasound echo signal is obtained; if a body plane ultrasonic beam is emitted, a body plane ultrasound echo signal is obtained. That is, the type name of the ultrasonic beam appears between "body" and "ultrasonic echo signal".
- For reception, the echoes of the bulk ultrasonic beams emitted in step S100 may be received using all of the array elements participating in the ultrasonic transmission, or using part of them in a time-division manner; alternatively, the array elements on the probe may be divided into a receiving portion and a transmitting portion, and all or part of the receiving elements are then used to receive the echoes, and so on. The reception of the bulk ultrasound beam and the acquisition of the bulk ultrasound echo signal may follow the manner conventional in the art.
- In step S200, receiving the echo of one bulk ultrasonic beam corresponds to obtaining one set of bulk ultrasonic echo signals.
- If one set of bulk ultrasonic echo signals is obtained in step S200, then in steps S300 and S400 the three-dimensional ultrasound image data of at least a portion of the scan target and the fluid velocity vector information of the target point are each acquired from that set.
- If the echoes of bulk ultrasonic beams emitted toward the scan target along a plurality of ultrasonic propagation directions are received in step S200, a plurality of sets of ultrasonic echo signals are obtained, each set derived from the echo of a bulk ultrasonic beam emitted along one ultrasonic propagation direction. In steps S300 and S400, the three-dimensional ultrasound image data of at least a portion of the scanning target is then acquired from one of the sets, while the fluid velocity vector information of the target point is acquired from the plurality of sets.
- When the echoes of a plurality of bulk ultrasonic beams are received in step S200, the corresponding set of ultrasonic echo signals includes a plurality of bulk ultrasonic echo signals; each emission of a bulk ultrasonic beam corresponds to obtaining one bulk ultrasonic echo signal. For example, each set of body plane ultrasonic echo signals includes multiple body plane ultrasonic echo signals, each derived from the echo obtained by emitting a body plane ultrasonic beam toward the scanning target along one ultrasonic propagation direction.
- Similarly, the echoes of the body focused ultrasound beams are received in step S200 to obtain a plurality of sets of focused ultrasound echo signals. Whatever type of bulk ultrasonic beam is transmitted, and however many times, in step S100, the echo of the corresponding type is received the corresponding number of times in step S200, generating bulk ultrasonic echo signals of the corresponding type.
- In step S300, the image processing module 7 acquires three-dimensional ultrasound image data of at least a portion of the scan target based on the bulk ultrasound echo signal. For example, three-dimensional ultrasound image data such as B1 and B2 shown in FIG. 6(b) can be obtained, which may include the position information of spatial points and the image information corresponding to those points; the image information includes feature information such as the gray attribute and color attribute of the spatial point.
- The three-dimensional ultrasound image data may be imaged using a body plane ultrasound beam or a volume focused ultrasound beam. The echo signal of the focused beam has a high signal-to-noise ratio, so the obtained three-dimensional ultrasound image data is of good quality; moreover, the main lobe of the body focused ultrasound beam is narrow and its side lobes are low, so the lateral resolution of the obtained three-dimensional ultrasound image data is also high. Therefore, in some embodiments of the present invention, the three-dimensional ultrasound image data of step S500 may be imaged using a volume focused ultrasound beam: a plurality of body focused ultrasound beams may be emitted in step S100 to complete one scan and obtain one frame of three-dimensional ultrasound image data.
- Alternatively, the above-described three-dimensional ultrasound image data may be acquired from the body plane ultrasonic echo signals obtained in step S200: one set of bulk ultrasonic echo signals may be selected to acquire the three-dimensional ultrasound image data of at least a portion of the scan target, or image-optimized three-dimensional ultrasound image data may be obtained based on multiple sets of ultrasonic echo signals.
- Step S300 may further include step S310 in FIG. 8(b): obtaining, from the bulk ultrasound echo signal, three-dimensional ultrasound image data by gray-scale blood flow imaging technology. Step S310 is employed after step S200 in the dynamic display method shown in FIG. 8(b).
- Gray-scale blood flow imaging technology, or two-dimensional blood flow display technology, is an imaging technique that uses digitally encoded ultrasound to observe blood flow, blood vessels, and the surrounding soft tissue, and displays them in gray scale.
- The processing of the three-dimensional ultrasound image data in the above embodiments can be understood either as three-dimensional data processing of the entire three-dimensional ultrasound image data set, or as processing of one or more frames of two-dimensional ultrasound image data contained within it.
- In step S400, the image processing module 7 obtains the fluid velocity vector information of the target point within the scan target based on the bulk ultrasonic echo signal obtained in step S200. The fluid velocity vector information mentioned herein includes the velocity vector of the target point (i.e., the velocity magnitude and the velocity direction), and/or the corresponding position information of the target point in the three-dimensional ultrasound image data.
- According to the corresponding position information of the target point in the three-dimensional ultrasound image data, and the image mapping relationship by which the three-dimensional ultrasound image data is converted into the two-way parallax image data in step S600, the corresponding position information of the target point in each path of the parallax image data can be obtained. Conversely, the corresponding position information of the target point in the three-dimensional ultrasound image data can be obtained from its corresponding position information in the two-way parallax image data.
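The patent does not specify the conversion mapping; as a hedged illustration only, the forward and inverse mapping between a target point in the volume and its positions in two parallax views can be sketched with a simple pinhole stereo model. The baseline `b`, focal length `f`, and screen center `(cx, cy)` are assumed values, not parameters from the source:

```python
# Hypothetical sketch of mapping a 3D target point to positions in two-way
# parallax image data and back. Pinhole stereo model; all parameters assumed.

def to_parallax(point, b=0.06, f=1000.0, cx=400.0, cy=300.0):
    """Project a 3D point (x, y, z), z > 0, into left/right view pixels."""
    x, y, z = point
    v = f * y / z + cy                    # same row in both views
    u_left = f * (x + b / 2) / z + cx     # left virtual viewpoint
    u_right = f * (x - b / 2) / z + cx    # right virtual viewpoint
    return (u_left, v), (u_right, v)

def from_parallax(left, right, b=0.06, f=1000.0, cx=400.0, cy=300.0):
    """Recover the 3D point from its two parallax-image positions."""
    (ul, v), (ur, _) = left, right
    disparity = ul - ur                   # equals f * b / z
    z = f * b / disparity
    x = (ul - cx) * z / f - b / 2
    y = (v - cy) * z / f
    return (x, y, z)
```

The round trip `from_parallax(*to_parallax(p))` recovers the original point, mirroring the two directions of the mapping described in the text.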
- The target point may be selected by the user: an instruction input by the user is obtained through the human-machine interaction device to set the distribution density of target points within the scan target, or the position of the target point (including selecting the location of the target point, or the initial position used when calculating the target point fluid velocity vector). For example, a distribution density may be chosen by moving a cursor displayed in the image, or a distribution density instruction input by the user may be obtained through gesture input, and target points are then selected at random within the scan target according to that instruction; and/or a target point position may be selected by moving the cursor displayed in the image or by gesture input, a mark position command input by the user is acquired, and the target point is obtained according to that command.
- The target point includes one or more discretely distributed volume pixels, or the neighborhood or data block of such volume pixels. The distribution density refers to the number of target points that may appear within a predetermined region, which may be the entire stereoscopic region of the scanning target or a partial region of it, i.e., the range in which the initial positions are placed when calculating the velocity vector of the target point in the second mode described below. The invention is not limited thereto.
- Alternatively, the position of the target point, or the initial position of the fluid velocity vector of the target point, may be selected at random within the scan target according to a distribution density preset by the system. In this way the user is given a flexible choice, improving the user experience.
- The process of obtaining the fluid velocity vector information of the target point within the scanning target based on the bulk ultrasonic echo signal in step S400 is explained in detail below.
- In some embodiments, step S400 includes: calculating, from the bulk ultrasonic echo signals obtained in step S200, the fluid velocity vectors at a first display position in the three-dimensional ultrasound image data of the target point at different times, to obtain the fluid velocity vector information of the target point at those times. The marked information may be the fluid velocity vector information at the first display position in the three-dimensional ultrasound image data for each time.
- For example, three-dimensional ultrasound image data P1, P2, ..., Pn corresponding to times t1, t2, ..., tn can be obtained, and the first display position of the target point in the three-dimensional ultrasound image data corresponding to each time is always located at the position (X1, Y1, Z1). On this basis, when the fluid velocity vector information is marked in the subsequent step S500, the fluid velocity vectors calculated for the different times are all marked at the position (X1, Y1, Z1) in the three-dimensional ultrasound image data. In other words, for each time the corresponding first display position is obtained, and the fluid velocity vector information calculated at that first display position in the three-dimensional ultrasound image data of the current time is used for marking. This display mode is referred to herein as the first mode, the same below.
- In other embodiments, step S400 includes: calculating, from the bulk ultrasonic echo signals obtained in step S200, the fluid velocity vectors obtained in turn as the target point continuously moves to corresponding positions in the three-dimensional ultrasound image data, thereby obtaining the fluid velocity vector information of the target point. That is, by repeatedly calculating the fluid velocity vector of the target point as it moves from one position of the three-dimensional ultrasound image data to another over a time interval, the corresponding fluid velocity vector at each position reached as the target point moves on from its initial position is obtained. In other words, in this embodiment the calculation position of the fluid velocity vector in the three-dimensional ultrasound image data is itself obtained by calculation.
- The marked information may be the fluid velocity vector information at the calculated position in the three-dimensional ultrasound image data corresponding to each time.
- For example, three-dimensional ultrasound image data P11, P12, ... corresponding to times t1, t2, ..., tn can be obtained. The initial position of the target point is determined from part or all of the target points selected by the user, or from the system-default distribution density of target points; in FIG. 11(b) the initial position is the point (X1, Y1, Z1). The fluid velocity vector of the target point (the black dot in the figure) at this initial position in the three-dimensional ultrasound image data P11 at time t1 is first calculated (as indicated by the arrow in P11). The displacement of the target point from t1 to t2 is then obtained, so that the target point of the first time t1 is found at a second display position, e.g. (X2, Y2, Z2), in the three-dimensional ultrasound image data at the second time; from the bulk ultrasonic echo signals obtained in step S200, the fluid velocity vector at this second display position is calculated, giving the fluid velocity vector information of the target point in the three-dimensional ultrasound image data P12 at time t2, which is marked into the three-dimensional ultrasound image data.
- In general, the displacement between two adjacent times is obtained, the corresponding position of the target point in the three-dimensional ultrasound image data at the second time is determined from that displacement, and the fluid velocity vector at the position to which the target point has moved from the first time to the second time is then obtained from the bulk ultrasonic echo signals. Proceeding in this way, the target point moves continuously from its initial position until it reaches (Xn, Yn, Zn); the fluid velocity vectors at the corresponding positions in the three-dimensional ultrasound image data at the different times are thereby obtained as the fluid velocity vector information of the target point, and are marked into the three-dimensional ultrasound image data for superimposed display.
- That is, the movement displacement of the target point over a time interval is calculated, the corresponding position of the target point in the three-dimensional ultrasound image data is determined from that displacement, and the movement proceeds from the initially selected target point according to the time interval. The time interval may be determined by the system transmission frequency or by the display frame rate, or it may be a time interval input by the user: the position reached after the target point moves through the user-input time interval is calculated, and the fluid velocity vector information at that position is obtained for comparison display.
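The bookkeeping of this second mode can be sketched as follows (an illustrative sketch with an assumed `velocity_field` interface, not the patent's implementation): a target point starting at an initial position is advanced each time interval by the locally estimated velocity, and the velocity vector at each reached position is recorded for marking.

```python
# Sketch of second-mode tracking: move a target point by the estimated
# local velocity over each time interval, recording (position, velocity)
# pairs for later marking into the 3D ultrasound image data.

def propagate_target(initial_pos, velocity_field, dt, n_steps):
    """velocity_field(pos, t) -> (vx, vy, vz); returns [(pos, velocity), ...]."""
    pos = initial_pos
    track = []
    for k in range(n_steps):
        v = velocity_field(pos, k * dt)          # velocity at current position
        track.append((pos, v))
        # displacement over one interval gives the next display position
        pos = tuple(p + vi * dt for p, vi in zip(pos, v))
    return track
```

With N initial target points, this routine would simply be run once per point, yielding N moving velocity vector identifiers.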
- The human-machine interaction device can be used to select N initial target points, or N initial target points can be set according to the system-default distribution position or distribution density; the fluid velocity vector information of each initial target point is then obtained in the manner described above.
- In step S500, the fluid velocity vectors obtained as the marked target point continuously moves to the corresponding positions in the three-dimensional ultrasound image data may form a velocity vector identifier that changes with time; the fluid velocity vector identifier may be marked with any shape. When displayed, the velocity vector identifiers present a flowing visual effect that changes with time, the arrow of each target point moving accordingly. This display mode is referred to as the second mode, the same below.
- The fluid velocity vector at the corresponding position in the three-dimensional ultrasound image data, at any time, for the target point within the scanning target may be obtained from the bulk ultrasonic echo signal in the following various manners.
- In some embodiments, the fluid velocity vector information of the target point within the scanning target is calculated from one set of bulk ultrasonic echo signals obtained by emitting the bulk ultrasonic beam along one ultrasonic propagation direction in step S100: the fluid velocity vector of the target point at the corresponding position in the volume image data is obtained by calculating the movement displacement and moving direction of the target point within a preset time interval. For example, the body plane ultrasonic echo signals can be used to calculate the fluid velocity vector information of the target point: the movement displacement and moving direction of the target point within the scan target over a preset time interval are calculated from one set of body plane ultrasonic echo signals.
- The method for calculating the fluid velocity vector of the target point at the corresponding position in the volume image data in this embodiment may use a method similar to speckle tracking. Alternatively, the component of the fluid velocity of the target point along the ultrasonic propagation direction may be obtained by Doppler ultrasound imaging, or the velocity component vector of the target point may be obtained from the time gradient and spatial gradient at the target point, and the like.
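The time-gradient/spatial-gradient idea can be illustrated with a one-dimensional optical-flow-style sketch (an assumption for illustration, not the patent's exact method): for a pattern moving at speed v, I(z, t) = f(z - v·t), so dI/dt = -v·dI/dz and hence v = -(dI/dt)/(dI/dz).

```python
import numpy as np

# Sketch: estimate a velocity component from temporal and spatial gradients
# of the image intensity along the beam axis (1-D optical-flow style).

def gradient_velocity(frames, dz, dt):
    """frames: 2-D array indexed [time, depth]; returns estimated speed."""
    d_dt = np.gradient(frames, dt, axis=0)   # temporal gradient
    d_dz = np.gradient(frames, dz, axis=1)   # spatial gradient
    mask = np.abs(d_dz) > 1e-12              # avoid division by zero
    return float(np.mean(-d_dt[mask] / d_dz[mask]))
```

In practice the per-sample ratio would be averaged over a neighborhood, as done here with a simple mean, to suppress noise.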
- In other embodiments, the process of obtaining the fluid velocity vector at the corresponding position in the three-dimensional ultrasound image data for the target point within the scanning target from the bulk ultrasonic echo signal may include the following steps.
- First, at least two frames of three-dimensional ultrasound image data are obtained from the bulk ultrasound echo signals obtained above; for example, at least a first frame and a second frame of three-dimensional ultrasound image data are obtained.
- A body plane ultrasonic beam can be used to acquire the image data for calculating the fluid velocity vector of the target point, since the plane ultrasonic beam propagates over substantially the entire imaging area. A 2D area-array probe is used to emit a set of body plane ultrasonic beams at the same angle, and after reception and 3D beam-compound imaging, one frame of three-dimensional ultrasound image data is obtained. If the volume rate is 10,000, i.e. 10,000 emissions per second, then 10,000 frames of three-dimensional ultrasound image data are obtained per second. The three-dimensional ultrasound image data of the scanning target obtained by processing the body plane beam echo signals is referred to herein as "body plane beam echo image data".
- a tracking stereo region is selected in the first frame of three-dimensional ultrasound image data, and the tracking stereo region may include a target point for which a velocity vector is desired.
- the tracking stereoscopic region may be a stereoscopic region of any shape centered on the target point, such as a cube region, e.g. the small cube region in FIG. 12(c).
- a stereoscopic region corresponding to the tracking stereoscopic region is searched for in the second frame of the three-dimensional ultrasonic image data, for example, a stereoscopic region having the greatest similarity with the aforementioned tracking stereoscopic region is searched as a tracking result region.
- the measure of similarity can use the metrics commonly used in the art.
- a measure of similarity can use a three-dimensional correlation calculation model such as the following formula:

$$(A,B,C) = \arg\min_{A,B,C} \sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{L} \left| X_1(i,j,k) - X_2(i+A,\, j+B,\, k+C) \right|$$

- where X 1 is the first frame of three-dimensional ultrasound image data; X 2 is the second frame of three-dimensional ultrasound image data; i, j and k are the three-dimensional coordinates of the image; arg min denotes the values of A, B and C at which the expression on its right reaches a minimum; A, B and C represent the new location; and M, N and L are the sizes of the tracking stereo region along the three dimensions.
- the velocity vector of the target point can be obtained.
- the magnitude of the fluid velocity vector can be obtained by dividing the distance between the tracking stereo region and the tracking result region (i.e., the displacement of the target point within the preset time interval) by the time interval between the first frame and the second frame of body plane beam echo image data; the direction of the fluid velocity vector may be the direction from the tracking stereo region to the tracking result region (i.e., the direction of the arrow in FIG. 12(c)), that is, the direction of movement of the target point within the preset time interval.
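The tracking step described above can be sketched as an exhaustive 3-D block-matching search. The sketch below is illustrative only: it uses sum of absolute differences (SAD) as the similarity measure, and the function names, voxel size and frame interval are assumptions, not the patent's exact formulation.

```python
import numpy as np

def track_block_3d(frame1, frame2, center, half, search):
    """Exhaustive 3-D block matching between two volumetric frames.

    A cubic tracking region of side (2*half + 1) centred at `center`
    (z, y, x) in frame1 is compared, via sum of absolute differences,
    against every candidate region within +/- `search` voxels in frame2;
    the displacement of the best match is returned.
    """
    cz, cy, cx = center
    block = frame1[cz - half:cz + half + 1,
                   cy - half:cy + half + 1,
                   cx - half:cx + half + 1]
    best, best_sad = (0, 0, 0), np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                z, y, x = cz + dz, cy + dy, cx + dx
                cand = frame2[z - half:z + half + 1,
                              y - half:y + half + 1,
                              x - half:x + half + 1]
                sad = np.abs(block - cand).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dz, dy, dx)
    return np.array(best)

def velocity_vector(displacement_voxels, voxel_size_mm, dt_s):
    """Displacement (in voxels) over the inter-frame interval -> mm/s."""
    return np.asarray(displacement_voxels) * voxel_size_mm / dt_s
```

Dividing the found displacement by the inter-frame interval, as the text describes, yields the velocity vector; its direction is the direction from the tracking region to the tracking result region.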
- wall filtering is performed on each of the obtained three-dimensional ultrasonic image data, that is, wall filtering is performed separately for each spatial position point on the three-dimensional ultrasonic image data in the time direction.
- the tissue signal on the three-dimensional ultrasound image data changes little with time, while fluid signals such as the blood flow signal change greatly due to flow. Therefore, a high-pass filter can be used as a wall filter for fluid signals such as blood flow signals. After wall filtering, the higher-frequency fluid signal is retained and the lower-frequency tissue signal is filtered out.
- the signal-to-noise ratio of the fluid signal can be greatly enhanced, which is beneficial to improve the calculation accuracy of the fluid velocity vector.
- the process of wall filtering the acquired three-dimensional ultrasound image data is equally applicable to other embodiments.
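One possible realization of the per-voxel wall filter is sketched below: a low-order polynomial fitted along the slow-time (frame) axis at every spatial point is removed, which acts as a high-pass filter. Polynomial-regression clutter filtering is a common choice, but the patent does not mandate this particular design, so the function name, filter order and data layout are assumptions.

```python
import numpy as np

def wall_filter(ensemble, order=1):
    """Polynomial-regression wall filter along the slow-time axis.

    ensemble: array of shape (T, Z, Y, X) -- T successive 3-D frames.
    Fits and removes a degree-`order` polynomial (the slowly varying
    tissue/clutter signal) at every spatial point, keeping the rapidly
    varying fluid signal.
    """
    t = np.arange(ensemble.shape[0])
    basis = np.polynomial.polynomial.polyvander(t, order)  # (T, order+1)
    q, _ = np.linalg.qr(basis)                 # orthonormal clutter basis
    flat = ensemble.reshape(ensemble.shape[0], -1)
    clutter = q @ (q.T @ flat)                 # projection onto tissue subspace
    return (flat - clutter).reshape(ensemble.shape)
```

A slowly varying (constant or linearly drifting) tissue signal is removed almost completely, while an oscillating fluid signal passes through largely unchanged.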
- a method for obtaining a velocity vector of a target point based on a temporal gradient and a spatial gradient at a target point includes:
- at least two frames of three-dimensional ultrasound image data are obtained according to the bulk ultrasonic echo signals, or wall filtering is first performed on the three-dimensional ultrasound image data; then the following steps are performed:
- based on the temporal gradient, the spatial gradient and the first velocity component along the ultrasonic propagation direction, a second velocity component in a first direction at the target point and a third velocity component in a second direction are respectively obtained, the first direction, the second direction and the ultrasonic propagation direction being mutually perpendicular;
- the fluid velocity vector of the target point is synthesized according to the first velocity component, the second velocity component and the third velocity component.
- the first direction, the second direction and the ultrasonic propagation direction being mutually perpendicular can be understood as constructing a three-dimensional coordinate system with the ultrasonic propagation direction as one coordinate axis; for example, the ultrasonic propagation direction is the Z-axis, and the first and second directions are the X-axis and Y-axis, respectively.
- the spatial gradients can be obtained by taking gradients in the X, Y and Z directions respectively on the three-dimensional ultrasound image data; the temporal gradient can be obtained by taking the gradient along the time direction at each spatial point, based on a plurality of frames of three-dimensional ultrasound image data.
- the subscript i denotes the result of the gradient calculation of the i-th frame of three-dimensional ultrasound image data in the X, Y and Z directions.
- the parameter matrix A is formed from the gradients along the three coordinate axes at each spatial point, calculated multiple times. Suppose a total of N calculations are made; because the time occupied by these N calculations is very short, the fluid velocity is assumed to remain constant during this time. ε i represents a random error.
- formula (3) satisfies the conditions of the Gauss-Markov theorem, and its solution is given by the following formula (4).
- the variance of the random error ε i can be expressed as the following formula (5).
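Under the constant-velocity and brightness-constancy assumptions stated above, the gradient equations form a linear system that is solved by ordinary least squares in the Gauss-Markov setting the text refers to. The sketch below is illustrative (the function name and NumPy usage are assumptions): it solves the system and also estimates the residual error variance.

```python
import numpy as np

def gradient_velocity(grads, dIdt):
    """Ordinary least-squares velocity estimate from spatio-temporal gradients.

    grads: (N, 3) spatial gradients [dI/dx, dI/dy, dI/dz] at one spatial
           point, from N successive gradient calculations.
    dIdt:  (N,) temporal gradients at the same point.
    Under brightness constancy, grads[i] . v + dIdt[i] = eps_i, so the
    Gauss-Markov (least-squares) solution of A v = b is returned together
    with an unbiased estimate of the random-error variance.
    """
    A = np.asarray(grads, float)
    b = -np.asarray(dIdt, float)
    v, residual_ss, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    n, p = A.shape
    sigma2 = float(residual_ss[0]) / (n - p) if (n > p and residual_ss.size) else 0.0
    return v, sigma2
```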
- the velocity values v z at different time points along the ultrasonic propagation direction (i.e., the Z direction) and their average value are obtained at each spatial point according to the Doppler ultrasonic measurement method, and are calculated for each spatial point.
- V D is the set of velocity values measured by Doppler ultrasound at different times.
- v z in formula (6) is the average value obtained by Doppler ultrasound.
- in the weighting matrix, O is a zero matrix, and I A and I B are identity matrices whose orders correspond to the numbers of rows of matrices A and B, respectively.
- the weighting coefficient is the square root of the reciprocal of the variance of the random error term in the linear error equation.
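The weighting described here, with each equation scaled by the square root of the reciprocal of its error variance, can be sketched as a stacked weighted least-squares problem that fuses the gradient system with Doppler measurements of the axial velocity. The axis ordering (v_z as the third component) and all names below are illustrative assumptions:

```python
import numpy as np

def fuse_gradient_and_doppler(A, b, vz_doppler, sigma_grad, sigma_dopp):
    """Weighted least-squares fusion of gradient equations with Doppler
    measurements of the axial (z) velocity.

    A, b            : gradient system  A v ~= b  (shapes (N, 3) and (N,))
    vz_doppler      : M Doppler measurements of v_z at the same point
    sigma_grad/dopp : error standard deviations; each row is weighted by
                      1/sigma, the square root of the reciprocal variance.
    """
    M = len(vz_doppler)
    B = np.zeros((M, 3))
    B[:, 2] = 1.0                      # each Doppler row observes v_z only
    big_A = np.vstack([np.asarray(A, float) / sigma_grad, B / sigma_dopp])
    big_b = np.concatenate([np.asarray(b, float) / sigma_grad,
                            np.asarray(vz_doppler, float) / sigma_dopp])
    v, *_ = np.linalg.lstsq(big_A, big_b, rcond=None)
    return v
```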
- the fluid velocity vector of the target point can be obtained using a Doppler ultrasound imaging method, as shown below.
- a plurality of bulk ultrasonic beams are continuously emitted toward the scanning target in the same ultrasonic propagation direction; the echoes of the multiple bulk ultrasonic beams are received to obtain multiple bulk ultrasonic echo signals, where each value in an ultrasonic echo signal corresponds to a value at a target point when scanning in that ultrasonic propagation direction; and step S400 includes:
- the multiple bulk ultrasonic echo signals are subjected to a Hilbert transform along the ultrasonic propagation direction, or IQ demodulation is performed on the echo signals.
- after beam synthesis, multiple sets of three-dimensional ultrasound image data representing complex values at each target point are obtained; after N transmissions and receptions, there are N complex values varying with time at each target point position. Then, according to the following two formulas, the velocity v z of the target point along the ultrasonic propagation direction is calculated:

$$\varphi = \arctan\!\left(\frac{\sum_{i=1}^{N-1}\big(y(i)\,x(i-1) - x(i)\,y(i-1)\big)}{\sum_{i=1}^{N-1}\big(x(i)\,x(i-1) + y(i)\,y(i-1)\big)}\right)$$

$$v_z = \frac{c\,\varphi}{4\pi f_0 T_{prf}}$$
- Vz is the calculated velocity value along the direction of propagation of the ultrasonic wave
- c is the speed of sound
- f 0 is the center frequency of the probe
- T prf is the time interval between two shots
- N is the number of shots
- x(i) is the real part of the i-th transmission; y(i) is the imaginary part of the i-th transmission.
- the above formulas calculate the flow velocity at a fixed position.
- the magnitude of the fluid velocity vector at each target point can be determined by the N complex values.
- the direction of the fluid velocity vector is the direction of ultrasonic wave propagation, that is, the direction of ultrasonic wave propagation corresponding to the plurality of bulk ultrasonic echo signals.
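The per-point estimate from the N complex values is the classic lag-one autocorrelation (Kasai) estimator. The sketch below is an illustrative implementation under that reading; the sign convention (motion producing increasing phase gives positive velocity) depends on the demodulation and is an assumption here.

```python
import numpy as np

def kasai_velocity(iq, c, f0, t_prf):
    """Lag-one autocorrelation (Kasai) velocity estimate at one target point.

    iq    : length-N complex ensemble, iq[i] = x(i) + 1j*y(i) for the
            i-th transmission at a fixed spatial position.
    c     : speed of sound (m/s)
    f0    : probe centre frequency (Hz)
    t_prf : time interval between two transmissions (s)
    """
    iq = np.asarray(iq)
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))   # lag-one autocorrelation
    phase = np.arctan2(r1.imag, r1.real)     # mean Doppler phase per pulse
    return c * phase / (4 * np.pi * f0 * t_prf)
```

For a scatterer moving at axial speed v, the phase advance per pulse is 4*pi*f0*v*t_prf/c, which the estimator inverts; speeds whose phase step exceeds pi alias, as in any pulsed Doppler system.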
- Doppler processing is performed on the volume ultrasonic echo signal by using the Doppler principle, and the moving speed of the scanning target or the moving portion therein can be obtained.
- the motion velocity of the scanning target or the moving portion therein can be obtained from the volume ultrasound echo signal by the autocorrelation estimation method or the cross correlation estimation method.
- the method of performing Doppler processing on the bulk ultrasonic echo signal to obtain the motion velocity of the scanning target or the moving portion therein may use any method, currently available or available in the future, for calculating the motion velocity of the scanning target or the moving portion therein from ultrasonic echo signals, and is not described in detail herein.
- the present invention is not limited to the above two methods, and other methods known in the art or possible in the future may be employed.
- the new position reached by a calculation point is often not the position of the point to be calculated; the velocity there can be obtained by interpolation, for example by an 8-point interpolation method.
- the gray point in the middle of the stereo region is the point to be calculated, and the eight black points are the positions at which the velocity is calculated in each frame.
- the distance between each black point (the black points represent the vertices of the stereo region) and the gray point is obtained through the spatial connection, and a weight list is then obtained according to these distances.
- the velocity at each black point is decomposed into Vx, Vy and Vz along three mutually perpendicular directions. According to the velocities of the eight black points in the three directions and the weight values, the velocity values in the three directions at the gray point are calculated, giving the speed and direction at the gray point.
- the 8-point interpolation method described above is based on a cubic stereo region; of course, interpolation may also be performed based on stereo regions of other shapes, such as a regular tetrahedron, a regular octahedron, and the like.
- the corresponding interpolation calculation method is set by delineating the three-dimensional structure of the region around the target point, so that the fluid velocity vector of the target point at the position to be calculated is computed from the fluid velocity vectors at the new positions reached by the calculation points.
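The 8-point interpolation can be sketched as distance-based weighting of the velocity vectors at the cube vertices. Inverse-distance weights are one plausible realization of "a weight list obtained according to the distance"; trilinear weights would be an equally valid choice, and all names here are assumptions.

```python
import numpy as np

def interpolate_velocity(corner_pos, corner_vel, point):
    """Inverse-distance-weighted 8-point interpolation of a velocity vector.

    corner_pos: (8, 3) positions of the cube vertices (the black points)
    corner_vel: (8, 3) velocity vectors (Vx, Vy, Vz) at those vertices
    point:      (3,) position of the point to be calculated (the gray point)
    """
    pos = np.asarray(corner_pos, float)
    vel = np.asarray(corner_vel, float)
    d = np.linalg.norm(pos - np.asarray(point, float), axis=1)
    if np.any(d == 0):               # point coincides with a vertex
        return vel[np.argmin(d)]
    w = 1.0 / d                      # weight list from the distances
    w /= w.sum()
    return w @ vel                   # weighted sum of vertex velocities
```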
- in a second mode, according to the bulk ultrasonic beams emitted along the plurality of ultrasonic propagation directions in step S100, when a plurality of scanning bodies are formed, the echoes of the ultrasonic beams from the plurality of scanning bodies are received to obtain multiple groups of bulk ultrasonic echo signals, and the fluid velocity vector information of the target point in the scanning target is calculated according to the multiple groups of bulk ultrasonic echo signals.
- when the velocity vector of the target point in the scanning target at the corresponding position in the three-dimensional ultrasound image data is calculated, a plurality of velocity component vectors at the corresponding position are acquired according to the multiple groups of bulk ultrasonic echo signals; then, according to the plurality of velocity component vectors, the fluid velocity vector of the target point at the corresponding position in the three-dimensional ultrasound image data is obtained.
- a body plane ultrasonic echo signal can be used to calculate the fluid velocity vector of a target point. In some embodiments of the invention, based on each group among multiple sets of body plane ultrasonic echo signals, a velocity component vector of the target point in the scanning target at a position is calculated, so that a plurality of velocity component vectors at that position are acquired according to the plurality of sets of body plane ultrasonic echo signals.
- the velocity component vector of the target point at the corresponding position is obtained by calculating the movement displacement and moving direction of the target point within a preset time interval.
- the method for calculating the velocity component vector of the target point may use the method similar to the speckle tracking described above, or the Doppler ultrasound imaging method may be used to obtain the velocity component vector of the target point in the ultrasonic propagation direction.
- the velocity component vector of the target point can also be obtained based on the temporal gradient and the spatial gradient at the target point, and so on. For details, refer to the detailed explanation of the first mode above, which is not repeated here.
- when there are two angles in step S100, the magnitude and direction of the fluid velocity at all the locations to be measured at one moment can be obtained after 2N transmissions; if there are three angles, 3N transmissions are required, and so on.
- Figure 12(a) shows emission at two different angles, A1 and B1. After 2N transmissions, the velocity direction and magnitude at the origin position in the figure can be calculated by velocity fitting, as shown in Figure 12(b).
- in Fig. 12(b), VA and VB are the velocity component vectors of the target point at the corresponding position along the two ultrasonic propagation directions A1 and B1 in Fig. 12(a), respectively, and the fluid velocity vector V of the target point at that position is obtained by spatial velocity synthesis.
- the image data obtained by each transmission can be reused, and the velocity vector can be calculated using the Doppler imaging method, thereby reducing the time interval between two successive estimates of the magnitude and direction of the whole-field fluid velocity.
- the minimum time interval for two ultrasonic propagation directions is the time for two transmissions; the minimum time interval for three ultrasonic propagation directions is the time for three transmissions, and so on.
- when there are at least three ultrasonic propagation directions in step S100, and at least three sets of bulk echo signals are used to calculate at least three velocity component vectors, if the corresponding at least three ultrasonic propagation directions are not in the same plane, the calculated fluid velocity vector is closer to the true velocity vector in three-dimensional space; this is hereinafter referred to as the constraint on the ultrasonic propagation directions.
- in step S100, the ultrasonic beam may be emitted toward the scanning target along N (3 ≤ N) ultrasonic propagation directions, while in step S400, when the fluid velocity vector of the target point at the corresponding position is calculated, each calculation uses n velocity component vectors, where 3 ≤ n ≤ N. That is, in step S100, the ultrasonic beam may be emitted toward the scanning target in at least three ultrasonic propagation directions, wherein at least three adjacent ultrasonic propagation directions are not in the same plane.
- in step S400, according to the process of calculating a velocity component vector of the target point in the scanning target based on one set of bulk ultrasonic echo signals among the at least three sets, when the fluid velocity vector of the target point at the corresponding position is calculated, at least three velocity component vectors corresponding to at least three continuously received sets of bulk ultrasonic echo signals are obtained, and the fluid velocity vector of the target point at the corresponding position is synthesized from the velocity component vectors in the at least three ultrasonic propagation directions.
- alternatively, the ultrasonic beam may be emitted toward the scanning target in N (3 ≤ N) ultrasonic propagation directions, while in step S400, when the fluid velocity vector of the target point at the corresponding position is calculated, each calculation uses all N velocity component vectors. That is, in step S100, the ultrasonic beam may be emitted toward the scanning target in at least three ultrasonic propagation directions, wherein the at least three ultrasonic propagation directions are not in the same plane.
- in step S400, according to the process of calculating a velocity component vector of the target point in the scanning target based on one set of bulk ultrasonic echo signals among the at least three sets obtained by the receiving, the respective velocity component vectors in all the ultrasonic propagation directions corresponding to the at least three sets of bulk ultrasonic echo signals are obtained, and the fluid velocity vector of the target point at the corresponding position is synthesized according to the velocity component vectors in all the ultrasonic propagation directions.
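Synthesizing the fluid velocity vector from velocity component vectors measured along several propagation directions amounts to solving a small linear system; the non-coplanarity constraint discussed above is exactly the rank-3 condition checked below. This is an illustrative sketch, not the patent's prescribed solver.

```python
import numpy as np

def synthesize_velocity(directions, components):
    """Least-squares synthesis of a 3-D velocity vector from its components.

    directions: (K, 3) unit vectors of the K ultrasonic propagation directions
    components: (K,) measured velocity component along each direction
    Solves directions @ v = components; a unique 3-D solution requires the
    directions to span three dimensions (the non-coplanarity constraint).
    """
    D = np.asarray(directions, float)
    if np.linalg.matrix_rank(D) < 3:
        raise ValueError("propagation directions must not be coplanar")
    v, *_ = np.linalg.lstsq(D, np.asarray(components, float), rcond=None)
    return v
```

With exactly three non-coplanar directions the system is square and the solution is exact; with more directions the least-squares fit averages out measurement noise, which is why using all N component vectors can bring the result closer to the true 3-D velocity.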
- the transmitting elements that drive the ultrasonic beam emission mentioned here realize deflection to change the direction of ultrasonic emission; for example, each linear array probe, or each transmitting element arranged in array form, is equipped with corresponding drive control to uniformly adjust the deflection angle or delay of each probe or transmitting element in the probe assembly structure, so that the scanning bodies formed by the bulk ultrasonic beams output by the probe assembly structure have different deflections, thereby obtaining different ultrasonic propagation directions.
- the number of ultrasonic propagation directions selected by the user may be obtained by configuring a user self-selection item on the display interface, or by providing an option configuration button or the like, generating command information that selects the number of velocity component vectors used to synthesize the fluid velocity vector in step S400; the number of ultrasonic propagation directions in step S100 is adjusted according to the command information, and the number of velocity component vectors used in step S400 to synthesize the fluid velocity vector of the target point at the corresponding position is determined according to the number of ultrasonic propagation directions. This provides the user with a more comfortable experience and a more flexible information extraction interface.
- the 3D image processing module 11 marks the fluid velocity vector information of the target point in the three-dimensional ultrasound image data to form a fluid velocity vector identifier, and obtains volume image data 900 including the fluid velocity vector identifier.
- the three-dimensional ultrasound image data can be collected in real time or non-real time; if non-real time, playback and pause processing of the three-dimensional ultrasound image data can be realized.
- the enhanced three-dimensional ultrasound image data of at least part of the scanning target is obtained by the gray-scale blood flow imaging technology in step S310, and the corresponding gray-scale features or fluid velocity information obtained by the gray-scale blood flow imaging technology may also be used to enhance the visual impression of the image displayed on the display device.
- the 3D image processing module 11 may segment the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data to obtain a cloud-like cluster body region block, and mark the cloud-like cluster body region block in the three-dimensional ultrasound image data to obtain volume image data including the cloud-like cluster body.
- after step S310, step S510 is employed, that is, the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data is segmented to obtain a cloud-like cluster body region block, and the cloud-like cluster body region block is marked in the three-dimensional ultrasound image data to obtain volume image data including the cluster body; for the specific implementation of step S510, refer to the related description of step S500.
- before the fluid velocity vector information at the target point and/or the cloud-like cluster body region block is marked, the three-dimensional ultrasound image data can be converted into volume image data with a perspective effect, which facilitates the subsequent parallax image conversion.
- different transparency is set hierarchically for the three-dimensional ultrasound image data, which facilitates, at any observation angle during subsequent parallax image conversion, displaying information inside the scanning target, such as the blood vessel 930 in FIG. 13 and FIG. 14, and the fluid velocity vector information of target points therein; for example, the fluid velocity vector information of the marked target points in FIGS. 13 and 14 forms the fluid velocity vector identifier 920.
- the three-dimensional ultrasound image data is made into parallel slices (710, 711, 712), each of which is set to a different transparency, or a plurality of slices are sequentially set with stepwise gradient transparency.
- Figure 13 (a) characterizes different transparency by different hatch fills.
- the transparency of the parallel cut surfaces (710, 711, 712) may be different, or may be stepwise changed in sequence.
- stepwise variable transparency may be set sequentially for the plurality of cut surfaces: the cut surface at the target position (i.e., the core observation position) may be set to a low transparency, and then, according to the positional relationship of the plurality of cut surfaces, the transparency corresponding to the cut surfaces on both sides of that cut surface is set to increase stepwise, or is simply set to a relatively high transparency, so that the background image is weakened by the transparency setting and the information at the target position (i.e., the core observation position) is highlighted.
- the cut surface at the target position in this embodiment may be one cut surface, or may refer to multiple adjacent cut surfaces.
- for example, the transparency of parallel section 711 may be 20%, while parallel sections 710 and 712 may each be 50%.
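The stepwise transparency assignment can be sketched as a simple function of each slice's distance from the core observation slice. The particular base value, step size and cap below are illustrative assumptions, not values from the embodiment.

```python
def slice_transparency(n_slices, target_index, base=0.2, step=0.15, cap=0.9):
    """Assign stepwise-increasing transparency to parallel slices.

    The slice at target_index (the core observation position) gets the low
    transparency `base`; transparency grows by `step` per slice with the
    distance from the target, clipped at `cap`, so that surrounding slices
    fade into the background and the target slice is highlighted.
    """
    return [min(cap, base + step * abs(i - target_index))
            for i in range(n_slices)]
```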
- the three-dimensional ultrasound image data may be hierarchically set with different transparency according to the observation angle of the two-way parallax image data.
- the observation angle of view herein may be a viewpoint position corresponding to the number of arbitrary parallaxes in the step of converting the volume image data into two-way parallax image data, or may be two observation angles for capturing the volume image data during playback.
- the three-dimensional ultrasound image data is made into concentric spherical sections (721, 722) centered on the observation point, and each section is set to a different transparency, or a plurality of sections are sequentially set with stepwise gradient transparency.
- the observation point in this embodiment can be selected by the user. For example, the spatial center point where the three-dimensional ultrasound image data exists can be used as the observation point.
- the step sizes of the plurality of parallel or concentric spherical sections in FIG. 13(a) and FIG. 13(b) above may be set as needed, provided the internal information of the scanning target can be displayed layer by layer.
- the transparency setting also takes into account the viewing angle used when the parallax images are converted; therefore, when the above three-dimensional ultrasound image data is hierarchically set with different transparency, layering the transparency from the perspective of the viewing angle may be considered, in order to present the internal information of the scanning target.
- the three-dimensional ultrasound image data is subjected to tissue structure segmentation, and the tissue structure regions obtained by the segmentation are set to have different transparency.
- 930 is a segment of a blood vessel image comprising a first layer of vessel wall tissue structure 931 and a second layer of vessel wall tissue structure 932; the two layers of vessel wall tissue are distinguished by different transparency, and different section lines in the figure indicate different tissue structure regions.
- each frame of three-dimensional ultrasound image data is converted into a three-dimensional perspective rendering.
- the 3D drawing software here may include 3ds max software, or other software tools that can display stereo renderings, or homemade 3D drawing software tools.
- the perspective rendering manner of the three-dimensional perspective rendering in this embodiment can also refer to the foregoing; for example, according to the result of the tissue structure segmentation, perspective effects are set separately for different tissue structures.
- the volume image data with the perspective effect may be converted separately for each frame of the three-dimensional ultrasound image data, and the fluid velocity vector information of the target point may be marked sequentially in each frame of the image according to the first mode or the second mode. For example, based on the second mode described above, the three-dimensional ultrasound image data is converted into volume image data with the perspective effect, and the time-varying fluid velocity vector information of the target point is marked in the volume image data to form a time-varying fluid velocity vector identifier, and/or a time-varying cloud-like cluster body region block is marked in the volume image data.
- the generation of the volume image data may specifically be:
- different transparency is set hierarchically for each frame of the three-dimensional ultrasound image data, and the fluid velocity vector information of the target point at the corresponding position is marked in each frame to obtain a single-frame volume image containing the fluid velocity vector identifier; the temporally continuous multi-frame volume images form the volume image data, so that when the volume image data is displayed, the fluid velocity vector identifier exhibits a flowing visual effect that changes with time, that is, a fluid velocity vector identifier changing over time can be observed by the human eye in the 3D ultrasound image.
- different transparency is set hierarchically for each frame of the three-dimensional ultrasound image data, and the cloud-like cluster body region block is marked in each frame to obtain a single-frame volume image including the cloud-like cluster body; the temporally continuous multi-frame volume images constitute the volume image data, so that when the volume image data is displayed, the cluster body exhibits a rolling visual effect that changes with time, that is, a cluster body rolling over time can be observed by the human eye in the 3D ultrasound image.
- the three-dimensional ultrasound image data is converted frame by frame into three-dimensional perspective renderings based on the three-dimensional drawing software, and the fluid velocity vector information at the corresponding position of the target point is marked in each three-dimensional rendering to obtain a single-frame volume image containing the fluid velocity vector identifier; the temporally continuous multi-frame volume images constitute the volume image data, so that when the volume image data is displayed, the fluid velocity vector identifier exhibits a flowing visual effect that changes with time.
- alternatively, each frame of the three-dimensional ultrasound image data is converted into a three-dimensional perspective rendering, and the cloud-like cluster body region block is marked in each three-dimensional rendering to obtain a single-frame volume image containing the cloud-like cluster body; the temporally continuous multi-frame volume images constitute the above volume image data, so that when the volume image data is displayed, the cluster body exhibits a rolling visual effect that changes with time.
- the three-dimensional ultrasound image data is displayed as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic display technology, and the time-varying fluid velocity vector information of the target point is marked in the spatial stereoscopic image to obtain the volume image data, so that displaying the volume image data causes the fluid velocity vector identifier to exhibit a flowing visual effect that changes with time; and/or the time-varying cloud-like cluster body region block is marked in the spatial stereoscopic image to obtain the volume image data, so that displaying the volume image data causes the cluster body to exhibit a rolling visual effect that changes with time.
- True three-dimensional image display technology refers to displaying three-dimensional ultrasound image data in a certain physical space based on holographic display technology or body-based three-dimensional display technology, forming a true virtual space of the scanning target.
- the holographic display technology herein mainly includes the traditional hologram (transmissive holographic display images, reflective holographic display images, image-plane holographic display images, rainbow holographic display images, synthetic holographic display images, etc.) and the computer-generated hologram (CGH, Computer Generated Hologram).
- the computer hologram floats in the air and has a wide color gamut.
- the object used to generate the hologram needs a mathematical model description generated in the computer, and the physical interference of light waves is likewise replaced by calculation steps.
- the intensity pattern in the CGH model can be determined and output to a reconfigurable device that re-modulates the light wave information and reconstructs the output.
- CGH obtains the interference pattern of a computer graphic (virtual object) through computer operation, replacing the optical recording of object-wave interference in the traditional hologram; the diffraction process of hologram reconstruction is unchanged in principle, and only a device that reconfigures the light wave information is added, realizing holographic display of different static and dynamic computer graphics.
- the spatial stereoscopic display device 8 includes a 360-degree holographic phantom imaging system comprising a light source 820, a controller 830 and a beam splitter 810.
- a spotlight can be used as the light source 820; the controller 830 includes one or more processors, receives the three-dimensional ultrasound image data output from the data processing module 9 (or the image processing module 7 therein) through a communication interface, processes it to obtain the interference pattern of the computer graphic (virtual object), and outputs the pattern to the beam splitter 810; the light projected by the light source 820 onto the beam splitter 810 exhibits the interference pattern, forming a spatial stereoscopic image of the scanning target.
- the beam splitter 810 here may be a special lens, a four-sided pyramid, or the like.
- the spatial stereoscopic display device 8 can also be based on a holographic projection device, for example one that forms a stereoscopic image on air, special lenses, fog screens, and the like. Therefore, the spatial stereoscopic display device 8 can also be one of an air holographic projection device, a laser-beam holographic projection device, a holographic projection device with a 360-degree holographic display screen (whose principle is to project an image onto a mirror rotating at high speed, thereby realizing a holographic image), a fog-screen stereoscopic imaging system, and similar equipment.
- the air holographic projection device forms a spatial stereoscopic image by projecting the interference pattern of the computer graphic (virtual object) obtained in the above embodiment onto a wall of airflow; because the vibration of the water molecules constituting the water vapor is unbalanced, a holographic image with a strong stereoscopic effect can be formed.
- the present embodiment adds, to the embodiment shown in FIG., an apparatus for forming an airflow wall.
- the laser beam holographic projection apparatus is a holographic image projection system that uses a laser beam to project a solid image; a spatial stereoscopic image is obtained by projecting the interference pattern of the computer graphic (virtual object) obtained in the above embodiment through a laser beam.
- when the computer graphic imaging object is projected, the mixed gas becomes a hot substance, and a holographic image is formed by continuous small explosions in the air.
- the fog screen stereo imaging system, on the basis of the embodiment shown in FIG., further includes an atomizing device for forming a water mist wall; using the water mist wall as a projection screen, the interference pattern of the computer graphic (virtual object) obtained in the above embodiment forms a holographic image on the water mist wall by laser light, thereby obtaining a spatial stereoscopic image.
- the fog screen is imaged in the air by laser light passing through particles in the air: an atomizing device creates an artificial spray wall, and this layer of water mist wall replaces the traditional projection screen. Combined with aerodynamics, a planar fog screen is produced, onto which the image is projected to form a holographic image.
- for devices based on holographic display technology, one may refer to the related device structures currently available on the market.
- the present invention is not limited to the above-mentioned devices or systems based on holographic display technology; holographic display devices or technologies that may appear in the future may also be used.
- volumetric three-dimensional display technology refers to creating, with the aid of the human visual mechanism, a display object composed of voxel particles rather than molecular particles.
- the voxels can be touched and really exist: the material located in the transparent display volume is stimulated by appropriate means, and voxels are formed through the absorption or scattering of visible radiation.
- a plurality of dispersed voxels can be formed in three dimensions, so that a three-dimensional image is formed in space. Currently the following two kinds are included.
- Rotating body scanning technology is mainly used for the display of dynamic objects.
- a series of two-dimensional images is projected onto a rotating or moving screen; because the screen moves at a speed imperceptible to the viewer and human vision persists, a three-dimensional object is formed in the human eye. Therefore, a display system using such stereoscopic display technology can realize true three-dimensional display of images (360° visible).
- Light beams of different colors in the system are projected onto the display medium through a light deflector, thereby reproducing rich colors.
- the display medium lets the beams produce discrete visible spots, which are voxels, each corresponding to a point in the three-dimensional image.
- a set of voxels creates the image, and the observer can view this true three-dimensional image from any viewpoint.
- the imaging space in a display device based on rotating body scanning can be generated by the rotation or translation of a screen.
- voxels are activated on the emitting surface as the screen sweeps across the imaging space.
- the system includes subsystems such as a laser system, a computer control system, and a rotating display system.
- the spatial stereoscopic display device 8 includes a voxel solid portion 811, a rotation motor 812, a processor 813, an optical scanner 819, and a laser 814.
- the voxel solid portion 811 may be a rotating structure that accommodates a rotating surface; the rotating surface may be a helicoid, and the voxel solid portion 811 contains a medium on which laser projections can be displayed.
- the processor 813 controls the rotation motor 812 to drive the rotating surface in the voxel solid portion 811 to rotate at high speed; the processor 813 then controls the laser to generate three R/G/B laser beams, which are combined into one chromatic beam that passes through the optical scanner 819 and
- produces a plurality of colored bright spots on the rotating surface in the voxel solid portion 811.
- because the rotation speed is high, a plurality of voxels is generated in the voxel solid portion 811, and these voxels aggregate to form a suspended spatial stereoscopic image.
- the rotating surface may also be an upright projection screen located in the voxel solid portion 811; the rotation frequency of the screen may be up to 730 rpm, and it is made of very thin translucent plastic.
- the processor 813 first splits the three-dimensional ultrasound image data into a plurality of sectional views: rotating about the Z axis, one longitudinal section perpendicular to the XY plane is intercepted for every X degrees of rotation (for example, 2 degrees).
- each time the upright projection screen rotates by X degrees, one sectional view is projected onto it.
- because the upright projection screen rotates at high speed, the multiple sections are projected onto it in rapid rotation, forming a natural 3D image that can be viewed from all directions.
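- One way this per-X-degree sectioning could be sketched is by nearest-neighbour sampling of longitudinal planes through the Z axis; the function name, step of 2 degrees, and sampling scheme below are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def longitudinal_sections(volume, step_deg=2):
    """Slice a volume (z, y, x) into longitudinal sections about the Z axis,
    one section per `step_deg` of rotation, by nearest-neighbour sampling.
    180 degrees covers every plane through the axis once."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    radius = min(cy, cx)
    # Radial sample positions along the section's horizontal axis.
    r = np.linspace(-radius, radius, nx)
    sections = []
    for angle in np.arange(0, 180, step_deg):
        theta = np.deg2rad(angle)
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, nx - 1)
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, ny - 1)
        sections.append(volume[:, ys, xs])  # one (nz, nx) longitudinal section
    return sections

vol = np.random.rand(16, 32, 32)
secs = longitudinal_sections(vol, step_deg=2)
```

Each returned section would then be projected onto the rotating screen at the matching angular position.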
- the spatial stereoscopic display device 8 includes a voxel solid portion 811 having an upright projection screen 816, a rotation motor 812, a processor 813, a laser 814, and an illumination array 817; a plurality of light-emitting elements is disposed on the illumination array 817.
- the light-emitting array 817 may use three DLP optical chips based on micro-electro-mechanical systems (MEMS), each provided with more than one million digital micro-mirrors, forming a high-speed light-emitting array; the chips are responsible for the R/G/B three-color images respectively, which are combined into one image.
- the processor 813 controls the rotation motor 812 to drive the upright projection screen 816 to rotate at high speed; the processor 813 then controls the laser to generate three R/G/B laser beams, inputs them to the illumination array 817, and projects the composite beam through the illumination array 817 onto
- the high-speed rotating upright projection screen 816 (the beam can also be projected onto the upright projection screen 816 by reflection through relay optics). A plurality of display voxels is thereby generated, and these voxels aggregate to form a suspended stereoscopic image.
- Static volume imaging technology forms a three-dimensional stereoscopic image based on frequency up-conversion.
- so-called frequency up-conversion three-dimensional display uses an imaging-space medium that absorbs multiple photons and spontaneously radiates fluorescence, thereby producing visible voxels.
- the basic principle is to cross two mutually perpendicular infrared lasers inside the up-conversion material: after two resonant absorptions in the up-conversion material, the luminescent-center electrons are excited to a high excitation level, and the subsequent level transition emits visible light, so the crossing point inside the up-conversion material becomes a bright illuminated spot.
- the area swept by the intersection of the two lasers is then a bright band emitting visible fluorescence; that is, it can display a three-dimensional graphic identical to the trace of the laser intersection.
- This display method allows the naked eye to see the three-dimensional image over 360 degrees.
- for static volume imaging technology, the medium provided in the voxel solid portion 811 in each of the above embodiments is composed of a plurality of liquid crystal screens arranged at intervals (for example, the resolution of each screen is 1024 × 748). The liquid crystal pixels of these special liquid crystal screens have a special electronically controlled optical property.
- when a voltage is applied, the liquid crystal pixels align parallel to the beam propagation direction like the slats of a louver, so the light beam illuminating that point passes through transparently; when the voltage is zero, the liquid crystal pixel becomes opaque and diffusely reflects the illuminating beam, thereby forming a voxel inside the liquid crystal panel stack.
- the rotation motor in Figs. 16 and 17 can be omitted in this case.
- 3D Depth Anti-Aliasing display technology can also be used to expand the depth perception of the plurality of spaced liquid crystal screens, so that a physical resolution of 1024 × 748 × 20 achieves a display resolution of up to 1024 × 748 × 608; as in the embodiment shown in Figure 17, this embodiment can also use DLP imaging technology.
- the above content only introduces several kinds of devices using volumetric three-dimensional display technology; for specifics, one may refer to the related device structures currently available on the market.
- the present invention is not limited to the above-mentioned devices or systems based on volumetric three-dimensional display technology; volumetric three-dimensional display technologies that may exist in the future may also be adopted.
- the spatial stereoscopic image of the scanning target may be displayed in a certain space or in any space, or presented based on display media such as air, lenses, fog screens, and rotating or stationary voxels.
- the fluid velocity vector information of the target point as a function of time is then marked in the spatial stereoscopic image to obtain volumetric image data.
- the image mapping relationship between the volumetric image data and the imaging range of the spatial stereoscopic image may be used to determine, according to the position of the target point in the volumetric image data,
- the position in the spatial stereoscopic image at which the time-varying fluid velocity vector information of the target point is marked.
- the blood flow velocity vector information of the target point can be marked in the volume image data in the following manner.
- the fluid velocity vector information of the target point obtained using the first mode described above is marked on the volume image data 900. As shown in FIG. 18, 910 represents part of a blood vessel schematic; in the figure,
- a cube with an arrow marks the fluid velocity vector information of the target point, where the arrow direction indicates the direction of the fluid velocity vector of the target point at that time, and the length of the arrow can be used to indicate the magnitude of the fluid velocity vector at the target point.
- an arrow 922 drawn with a solid line indicates the fluid velocity vector information of a target point at the current time,
- and an arrow 921 drawn with a broken line indicates the fluid velocity vector information of the target point at the previous moment.
- to exhibit the stereoscopic display effect of the tissue structure in the volume image data, an object at a position close to the observation point is drawn large, and an object at a position far from the observation point is drawn small.
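- The arrow-glyph encoding described above (direction from the velocity vector, length proportional to its magnitude) could be sketched as follows; the scale factor, length clamp, and function name are illustrative assumptions.

```python
import numpy as np

def velocity_arrow(position, velocity, scale=0.1, max_len=5.0):
    """Return the start and tip of an arrow marking a target point's
    fluid velocity vector: the arrow points along the velocity direction,
    and its length is proportional to (and clamped by) the speed."""
    p = np.asarray(position, dtype=float)
    v = np.asarray(velocity, dtype=float)
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return p, p.copy()          # no flow: degenerate (zero-length) arrow
    length = min(speed * scale, max_len)   # clamp so fast flow stays in frame
    tip = p + v / speed * length
    return p, tip

start, tip = velocity_arrow([1.0, 2.0, 3.0], [0.0, 30.0, 0.0])
```

A renderer would then draw the cube (or sphere) marker at `start` with its arrow extending to `tip`.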
- alternatively, the fluid velocity vector information of the target point obtained using the second mode described above is marked on the volume image data; that is, the fluid velocity vector information of the target point includes the fluid velocity vectors obtained as the target point continuously moves, corresponding in sequence to the corresponding positions in the three-dimensional ultrasonic image data. Then, in step S500, the corresponding fluid velocity vector is marked as the target point continuously moves to each corresponding position, forming a fluid velocity vector identifier that changes with time.
- as shown in FIG. 19, in order to exhibit a stereoscopic display effect, an object at a position close to the observation point is drawn large, and an object at a position far from the observation point is drawn small. In FIG. 19,
- the fluid velocity vector information of the target point is marked by the arrow 940, where the arrow direction indicates the direction of the fluid velocity vector at the target point, and the length of the arrow can be used to indicate the magnitude of the fluid velocity vector at the target point; 930 is a section of a blood vessel image.
- the sphere with an arrow 941 drawn with a solid line indicates the fluid velocity vector information of the target point at the current time,
- and the sphere with an arrow 942 drawn with a broken line indicates the fluid velocity vector information of the target point at the previous moment.
- when the fluid velocity vector information of the target point is obtained by the second mode described above, the marker 940, which flows with time, is superimposed on the volume image data.
- 930 is a segment of a blood vessel image that includes a first layer of vessel wall tissue structure 931 and a second layer of vessel wall tissue structure 932, the two layers of vessel wall tissue being distinguished by different colors.
- the blood flow velocity vectors of the target points are marked by the arrows 973 and 962 in the two sets of blood vessels 960 and 970, respectively; the stereoscopic image regions 971, 972, and 961 of other tissue structures
- are also marked with other colors to distinguish them.
- in the figure, the different colors of the regions are represented by different types of fill hatching.
- to distinguish the displayed information, the volume image data contains stereoscopic image regions presenting each tissue structure according to the anatomical structure and its hierarchical relationships, and the color parameters of each stereoscopic image region are configured so that adjacent stereoscopic image regions are displayed distinguishably.
- the contours of the stereoscopic image regions of the respective tissue structures can also be displayed, to avoid covering or confusing the fluid velocity vector identifiers. For example, as shown in FIG. 18, for a segment of blood vessel 910, its outer contour line and/or some cross-sectional contour lines may be displayed to indicate the image region in which the fluid velocity vector identifier 920 is located, thereby highlighting
- the fluid velocity vector identifier 920 and presenting it more intuitively and clearly.
- a corresponding grayscale blood flow imaging technique can thereby be obtained.
- grayscale features or velocity information can also be displayed in the 3D ultrasound image at output. For example, whether the enhanced three-dimensional ultrasound image data is processed as an integral 3D data
- body or as a plurality of two-dimensional images processed separately, the corresponding cluster region block can be obtained in each frame of enhanced three-dimensional ultrasonic image data in the following manner.
- when performing step S500, first, the region of interest characterizing the fluid region in one or more frames of enhanced three-dimensional ultrasound image data is segmented to obtain cloud-like cluster region blocks; the cloud-like
- cluster region blocks are then marked in the three-dimensional ultrasound image data to form cluster bodies, and volume image data including the cluster bodies is obtained, so that cluster bodies that roll over with time are presented in the 3D ultrasonic image.
- the cluster bodies at different times are represented in sequence by 950, 951, and 952 in different line types; as time passes, it can be seen that the cluster body rolls over with time, vividly representing the overall rolling of the fluid and giving the observer a full perspective view.
- the region of interest may be segmented based on the image grayscale attribute.
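- A minimal sketch of such grayscale-based segmentation into cluster region blocks is given below: voxels above a grayscale threshold are grouped into connected blocks. The threshold value, 6-connectivity, and function name are illustrative assumptions, not the patent's method.

```python
import numpy as np
from collections import deque

def cluster_blocks(volume, threshold):
    """Segment a 3D image into cloud-like cluster region blocks: voxels whose
    grayscale exceeds `threshold` are fluid candidates, grouped by
    6-connected breadth-first flood fill. Returns (labels, block count)."""
    mask = volume > threshold
    labels = np.zeros(volume.shape, dtype=int)
    count = 0
    offsets = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # voxel already assigned to a block
        count += 1
        labels[seed] = count
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                p = (z + dz, y + dy, x + dx)
                if (0 <= p[0] < volume.shape[0] and 0 <= p[1] < volume.shape[1]
                        and 0 <= p[2] < volume.shape[2]
                        and mask[p] and not labels[p]):
                    labels[p] = count
                    queue.append(p)
    return labels, count

vol = np.zeros((8, 8, 8))
vol[1:3, 1:3, 1:3] = 1.0   # first cloud-like block
vol[5:7, 5:7, 5:7] = 1.0   # second, disconnected block
labels, n = cluster_blocks(vol, 0.5)
```

Tracking the labelled blocks across frames would then produce the rolling cluster bodies described above.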
- Fig. 21(b) shows an effect diagram in which the fluid velocity vector information of the target point, marked by the spherical body 940 with an arrow, is superimposed on Fig. 21(a).
- the cluster region blocks expressing blood flow are superimposed with color information such as white or orange for distinction.
- when the enhanced three-dimensional ultrasound image data is segmented based on image grayscale to characterize the fluid region,
- the grayscale characteristic of an entire region block can be represented by a value or a set of attribute values, such as the maximum or minimum grayscale of the spatial points inside it.
- cluster region blocks with different grayscale features are rendered in different colors. For example, if the cluster region blocks obtained by segmentation are classified by grayscale feature attribute into classes 0 to 20, each class may be marked with one display color, or classes 0 to 20 may respectively be marked with colors of different purity under the same hue.
- the cluster region blocks 953 and 954 may be marked with different colors to indicate the grayscale characteristics arising from the flow velocity.
- likewise, the image-grayscale-based segmentation method described above can be used to obtain blocks of different gray scales, and different colors can be superimposed according to the grayscale changes of different regions within a cluster region block.
- in the figure, the different regions in the cluster region blocks 953 and 954 are filled with different cross-hatching to indicate that different colors are superimposed for rendering. For the color rendering manner, the above embodiment may also be adopted:
- the different regions in a cluster region block are classified by grayscale feature attribute into multiple categories, and each category is marked with one hue, or the multiple categories are marked with different colors of the same hue.
- alternatively, the cluster region blocks 953 and 954 are marked with different colors to characterize the velocity information of their corresponding fluid regions.
- the present invention thus actually provides another display mode, as shown in FIG. 21 and FIG. 22, in which a mode switching command input by the user switches from the current display mode to the display mode obtained by displaying the volume image data including the cluster bodies, so that the cluster bodies exhibit a time-varying rolling visual effect on output display.
- when performing step S500 of marking the fluid velocity vector information of the target point in the three-dimensional ultrasonic image data, the fluid velocity vector identifiers (920, 940, 973, 962, 981, 982) can be displayed distinguishably from the background image portion of the volume image data (i.e., the stereoscopic image regions of other tissue structures in the volume image data, such as the blood vessel wall region, the lung region, etc.) by configuring a combination of one or more of their color, three-dimensional shape, and transparency. For example, if the vessel wall is green, the fluid velocity vector identifier is marked in red; or the vessel wall and fluid velocity vector identifiers of an artery are both red, while those of a vein are both green.
- one, two, or more of the color, three-dimensional shape, and transparency parameters of the fluid velocity vector identifiers (920, 940, 973, 962, 981, 982) used to mark fluid velocity vector information in the volume image data can also be combined to distinguish the different rate grades and directions of the displayed fluid velocity vector information.
- for example, the intra-arterial fluid velocity vector identifiers use a graded red color family,
- with each grade of color indicating a different rate grade,
- while the venous fluid velocity vector identifiers indicate the different rate grades with the grades of a graded green family: dark red or dark green indicates fast flow, and light red or light green indicates slow flow.
- for the matching of colors, please refer to the relevant color science literature, which will not be enumerated in detail here.
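- The graded red/green rate encoding could be sketched as a simple linear ramp from a light to a dark shade; the specific RGB endpoints and function names are illustrative assumptions, not colors prescribed by this patent.

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples, t in [0, 1]."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def velocity_color(speed, max_speed, arterial=True):
    """Map a speed to a graded RGB color: arteries use a red ramp, veins a
    green ramp; darker shades mean faster flow, lighter shades slower flow."""
    t = max(0.0, min(speed / max_speed, 1.0))       # normalised speed
    light_red, dark_red = (1.0, 0.8, 0.8), (0.5, 0.0, 0.0)
    light_green, dark_green = (0.8, 1.0, 0.8), (0.0, 0.4, 0.0)
    if arterial:
        return lerp(light_red, dark_red, t)
    return lerp(light_green, dark_green, t)

slow = velocity_color(0.0, 1.0)    # light red: slow arterial flow
fast = velocity_color(1.0, 1.0)    # dark red: fast arterial flow
```

Quantizing `t` into a fixed number of grades before interpolation would reproduce the discrete rate grades described above.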
- the fluid velocity vector identifier includes a stereoscopic marker with an arrow or a direction guide:
- for example, the cube with an arrow in Fig. 18, the sphere with an arrow in Fig. 19, a prism or cone with an arrow (the direction of the velocity vector can be indicated by the tip of the cone), or a truncated cone
- whose small end serves as the direction guide. The direction of the long diagonal of a stereoscopic marker's longitudinal cross-section, or the two ends of the long axis of an ellipsoid, may also serve as
- direction guides to characterize the direction of the fluid velocity vector, and so on; the present invention does not limit the shape of the fluid velocity vector identifier. Any stereoscopic marker with a direction guide may be used here to mark the fluid velocity vector of the target point. Thus, to make the fluid velocity vector information of the target point more intuitive, the direction of the fluid velocity vector can be characterized by the arrow or direction guide of the stereoscopic marker, and the magnitude of the fluid velocity vector can be represented by the volume of the stereoscopic marker.
- of course, the fluid velocity vector identifier may also employ a stereoscopic marker without an arrow or direction guide, such as a sphere, ellipsoid, cube, rectangular parallelepiped, or any other shape. In that case, to make the fluid velocity vector information of the target point more intuitive, the magnitude of the fluid velocity vector can be represented by the rotational speed or the volume of the stereoscopic marker, and the direction of the fluid velocity vector can be displayed by moving the stereoscopic marker over time;
- for example, the second mode described above can be used to calculate the fluid velocity vector of the target point, thereby obtaining a fluid velocity vector identifier that flows and changes over time.
- to facilitate labeling on the volume image data or the three-dimensional ultrasound image data, the rotational speed or the volume of the stereoscopic marker is associated with the magnitude of the fluid velocity vector.
- when rotation is used, the direction of rotation may be the same or different for all the stereoscopic markers, and the rotational speed should be one the human eye can perceive;
- for visibility, an asymmetric stereoscopic marker, or a stereoscopic marker bearing a surface mark, may be used.
- of course, the rotational speed of a stereoscopic marker with an arrow can also indicate the magnitude of the fluid velocity vector,
- while the arrow direction characterizes the direction of the fluid velocity vector. Therefore, the present invention is not limited to the above combinations for indicating the magnitude or direction of the fluid velocity vector:
- the magnitude of the fluid velocity vector can be expressed by the volume or rotational speed of the stereoscopic marker used to mark the target point's fluid velocity vector, and/or the direction of the fluid velocity vector can be characterized by the direction of the arrow on the stereoscopic marker, the orientation of the direction guide, or the movement of the stereoscopic marker over time.
- in addition, the fluid velocity vector information of the target point obtained using the second mode described above can be superimposed on the volume image data; that is, the fluid velocity vector information of the target point includes
- the fluid velocity vectors corresponding in sequence to the corresponding positions to which the target point continuously moves in the three-dimensional ultrasonic image data. Then, in step S500, the plurality of corresponding positions (for example, two or more) to which the same target point continuously moves in the three-dimensional ultrasonic image data can be connected by an associated marker, forming a motion path trajectory of the target point for output on display.
- the associated markers for displaying the motion path trajectory include an elongated cylinder, a segmented elongated cylinder, a dovetail mark, and the like.
- in Fig. 22, to exhibit the stereoscopic display effect, an object at a position close to the observation point is drawn large, and an object at a position far from the observation point is drawn small;
- 930 is a blood vessel image. A fluid velocity vector identifier (sphere with an arrow 981 or sphere 982) marking the blood flow velocity vector information of the target point starts from the initial position of the fluid velocity vector and, connected in sequence by the elongated
- cylinder or segmented elongated cylinder 991, continuously moves through the plurality of corresponding positions of the same target point in the volume image data to form a motion path trajectory, so that the observer can understand the movement of the target point as a whole.
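- The accumulation of such a motion path trajectory could be sketched as a simple Euler integration of a target point through a velocity field, each stored position becoming one segment of the elongated-cylinder marker; the field, step size, and function names are illustrative assumptions.

```python
import numpy as np

def motion_trajectory(start, velocity_at, dt=0.01, steps=100):
    """Trace a target point's motion path through a velocity field by Euler
    integration; the returned array of positions forms the trajectory that
    the associated marker (elongated cylinder, etc.) would connect."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        path.append(p + dt * np.asarray(velocity_at(p), dtype=float))
    return np.array(path)

# Hypothetical uniform field: everything drifts along +x at 1 unit/s.
traj = motion_trajectory([0.0, 0.0, 0.0], lambda p: (1.0, 0.0, 0.0),
                         dt=0.1, steps=10)
```

In practice `velocity_at` would sample the fluid velocity vectors measured in the three-dimensional ultrasonic image data rather than an analytic field.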
- another way of displaying the trajectory is also given in Fig. 22:
- a fluid velocity vector marker 982 is followed by a long tail, similar to the tail of a comet.
- further, the method also allows configuring the related parameters of the associated marker depicting the motion path trajectory, such as the marker shape of the associated marker, or the marker shape of the connecting line and its color.
- colors here include any color obtained by changing the hue, saturation (purity), contrast, transparency, and so on, and the aforementioned marker shapes can take various forms: any of the elongated cylinder, segmented elongated cylinder, and dovetail marks that can describe the direction of the marker.
- in this way, the present invention actually provides another display mode, as shown in FIG. 22, in which a mode switching command input by the user switches from the current display mode to the display mode showing the motion path trajectory of the target point, that is, the display mode obtained by performing the above step of connecting, through the associated marker, the plurality of corresponding positions to which the same target point continuously moves in the three-dimensional ultrasound image data to form the motion path trajectory of the target point.
- there may be one or more target points depicting motion path trajectories, and their initial positions may be obtained from an input instruction: for example, a distribution density instruction input by the user is acquired and the target points are selected randomly within the scan target according to the distribution density instruction; or a mark position instruction input by the user is acquired and the target points are obtained according to the mark position instruction.
- in step S500, if the three-dimensional ultrasound image data is displayed as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic image display technology, the method of marking the time-varying fluid velocity vector information of the target point in the spatial stereoscopic image, for example how to configure the color and the identifier shape, may refer to the method of marking the fluid velocity vector information of the target point in the volume image data, which is not repeated here.
- when the three-dimensional ultrasound image data is displayed as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic image display technology, and the time-varying fluid velocity vector information of the target point is marked in the spatial stereoscopic image to obtain the volume image data,
- the following technical solutions can also be included:
- in step S600, the parallax image generating module converts the volume image data into two-way parallax image data.
- for example, a volume image of a first time phase and a volume image of a second time phase that are temporally adjacent are extracted from the volume image data 900; according to
- the volume image of the first time phase, one path of parallax image data with an arbitrary parallax number N is generated, and another path of parallax image data is generated with the same parallax number according to the volume image of the second time phase, thereby obtaining the two-way parallax image data.
- for example, the volume image of the first time phase and the volume image of the second time phase may each be converted into parallax image data according to nine parallaxes, each path of parallax image data including nine parallax images;
- or the volume image of the first time phase and the volume image of the second time phase are respectively converted into the two-way parallax image data according to two parallaxes, each path of parallax image data including two parallax images.
- the arbitrary parallax number may be a natural number greater than or equal to 1, and the volume image of each time phase is moved viewpoint by viewpoint through a predetermined parallax angle to the corresponding viewpoint positions.
- the two-way parallax image data is generated and output according to the time phase and the order of viewpoint movement. For example, when the two-way parallax image data is output, the plurality of parallax images generated from
- the volume image of the first time phase is output first, in the order of movement of the viewpoint positions.
- alternatively, the above volume image data is played back, two observation angles simulating a person's left and right eyes are established, and the played volume image data is observed from the two
- observation angles respectively to acquire the above two-way parallax image data.
- each frame in the volume image data is thus converted into two-way parallax image data through the two observation angles.
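- A crude stand-in for the two observation angles can be sketched by shearing each depth slice of the volume horizontally (by opposite amounts for the two eyes, proportional to depth) and taking a maximum-intensity projection. Real parallax generation would render the volume from shifted virtual-camera viewpoints; the shear approximation, parallax angle, and function name here are illustrative assumptions.

```python
import numpy as np

def parallax_views(volume, parallax_deg=4.0):
    """Produce a left/right parallax image pair from a volume ordered as
    (z = depth, y, x): each depth slice is shifted horizontally by a
    disparity that grows with depth, then projected by maximum intensity."""
    nz, ny, nx = volume.shape
    half = np.deg2rad(parallax_deg / 2.0)

    def view(sign):
        sheared = np.zeros_like(volume)
        for z in range(nz):
            # Opposite horizontal disparity for each eye, growing with depth.
            shift = int(np.round(sign * np.tan(half) * z))
            sheared[z] = np.roll(volume[z], shift, axis=1)
        return sheared.max(axis=0)      # (ny, nx) projection for one eye

    return view(-1.0), view(+1.0)        # left-eye, right-eye images

vol = np.arange(8 * 16 * 16, dtype=float).reshape(8, 16, 16)
left, right = parallax_views(vol, parallax_deg=30.0)
```

The pair (`left`, `right`) corresponds to one frame of the two-way parallax image data; repeating this per time phase yields the full stream.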
- as shown in FIG. 26, the played volume image data 900 is displayed on the display 901; the position of the light source is then set, and images are taken from two observation angles at the positions of a first virtual camera and a second virtual camera to obtain the above two-way parallax image data, which is output for display on the display screen display device so that the human eye can observe a 3D ultrasound image.
- the display 901 may be a flat display at the image processing end or the above display screen display device.
- of course, the process of FIG. 26 can also run only inside the background host without being displayed.
- the above method for converting volume image data into two-way parallax image data can be performed by software programming using a software program to implement the functions of the above-described parallax image generating module, for example, by software programming.
- the three-dimensional ultrasound image data or the above-described volume image data can be converted into two-way image data.
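The two-virtual-camera conversion described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: a 3-D point cloud stands in for the volume image data, and the two "virtual cameras" are modeled as orthographic views rotated about the vertical axis by half the parallax angle in each direction. The function name, the 2° default parallax, and the orthographic projection are all assumptions for illustration.

```python
import numpy as np

def rotation_y(angle_rad):
    """Rotation matrix about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def two_view_parallax(points, parallax_deg=2.0):
    """Project a 3-D point cloud (N x 3) from two virtual camera
    angles separated by `parallax_deg`, mimicking the left and right
    eyes. Returns two N x 2 arrays of orthographic image coordinates."""
    half = np.deg2rad(parallax_deg / 2.0)
    left = points @ rotation_y(-half).T    # left-eye viewpoint
    right = points @ rotation_y(+half).T   # right-eye viewpoint
    # Orthographic projection: drop the depth (z) coordinate.
    return left[:, :2], right[:, :2]

cloud = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 0.8]])
l_img, r_img = two_view_parallax(cloud)
```

Applied frame by frame to the played-back volume data, this yields the two-way parallax image stream; a real renderer would use perspective projection and lighting as in FIG. 26.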
- the 3D image processing module marks the time-varying fluid velocity vector information of the target points in the three-dimensional ultrasound image data and obtains volume image data containing fluid velocity vector markers; a spatial stereoscopic display device based on true three-dimensional stereoscopic display technology then displays the volume image data as a dynamic spatial stereoscopic image, wherein the spatial stereoscopic display device comprises one of a holographic display device based on holographic display technology and a volume-pixel display device based on volumetric three-dimensional display technology; the display here may be real-time acquisition display or non-real-time display.
- the parallax image generating module 12 includes a first imaging device 841 and a second imaging device 842, and the first imaging device 841 and the second imaging device 842 respectively capture the dynamic spatial stereoscopic image to obtain the two-way parallax image data.
- the first imaging device 841 and the second imaging device 842 can be any type of imaging device, such as an optical camera or an infrared camera.
- the display screen display device 8 outputs and displays the two-way parallax image data so that a 3D ultrasound image is perceived when viewed by the human eye.
- the display screen display device 8 here may be based on glasses-type 3D display technology or naked-eye 3D display technology.
- the display screen display device 8 may include a display screen for receiving and displaying the two-way parallax image data, together with wearable glasses.
- glasses-type 3D display technology is realized mainly through special glasses that exploit optical principles.
- the 3D glasses on the market are mainly of the shutter type and the polarized type; in terms of viewing mode, there are mainly passive viewing and active viewing.
- active-viewing 3D glasses rely on the active operation of the glasses themselves to present the 3D effect; there are two kinds, dual-display 3D glasses and liquid-crystal 3D glasses.
- dual-display 3D glasses cannot meet the demand for multi-person viewing, but they are still a kind of active 3D glasses; their principle is to use two sets of small displays mounted in the left and right lens positions to display the left and right pictures separately, forming the 3D effect.
- liquid-crystal 3D glasses are composed of active liquid-crystal lenses; their principle is to use an electric field to change the transmission state of the liquid crystal, alternately occluding the left and right eyes at a frequency of several tens of times per second.
- a sync signal keeps the liquid-crystal 3D glasses synchronized with the screen.
- when the left-eye picture is shown, the right lens is blacked out, and when the right-eye picture is shown, the left lens is blacked out.
- a 3D effect is thus formed, but this alternating occlusion reduces the brightness of the picture.
- the two-way parallax image data are in effect images that simulate what enters the left and right eyes respectively.
- for how to output and display the two-way parallax image data to obtain the glasses-type 3D display effect, refer to the related prior art, which is not described here.
- the display screen display device 8 may include a naked-eye 3D display screen for receiving and displaying the two-way parallax image data.
- naked-eye 3D display technology combines the latest panel manufacturing technology with engine software technology.
- in the integral imaging method, a lenticular lens is placed in front of the liquid crystal panel, i.e. on the same screen.
- 3D display is realized by divided-area display (spatial-multiplexing naked-eye 3D technology) and time-divided display (time-multiplexing naked-eye 3D technology).
- for image display, the left-right parallax of existing 2D and 3D images can be converted by computer image processing into a multi-view (e.g. nine-view) 3D image.
- Lenticular lens (Lenticular Lens)
- lenticular lens technology, also known as cylindrical lens or micro-cylindrical lens technology, applies a special precision cylindrical lens screen to the liquid crystal panel to feed the encoded 3D images independently into a person's left and right eyes, so that 3D can be experienced with the naked eye while remaining compatible with 2D.
- Multi-layer display realizes naked-eye 3D text and 3D images through two liquid crystal panels overlapped at a certain interval; in depth-fused 3D display (Depth-fused 3D), the two liquid crystal panels are overlapped front and back, the foreground and background images are displayed on the front and rear panels at different brightnesses, and the depth-of-field effect is expressed by the physical difference in depth.
- Directional backlight matches two groups of fast-responding LCD panels and drivers so that the 3D images enter the viewer's left and right eyes in alternating order.
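The depth-fused 3D idea mentioned above places a voxel's perceived depth between the two panels by splitting its luminance between them. A minimal sketch, assuming a simple linear luminance weighting (the function name and the linear rule are illustrative assumptions, not a claimed implementation):

```python
def depth_fused_split(luminance, depth):
    """Split a voxel's luminance between front and rear LCD panels.
    `depth` in [0, 1]: 0 = at the front panel, 1 = at the rear panel.
    Linear luminance weighting places the fused percept in between."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be in [0, 1]")
    front = luminance * (1.0 - depth)  # brighter front -> nearer percept
    rear = luminance * depth           # brighter rear -> farther percept
    return front, rear

# A voxel halfway between the panels contributes equally to both.
f, r = depth_fused_split(200.0, 0.5)
```

With `depth = 0` all luminance goes to the front panel, so the voxel appears at the front surface; intermediate values smoothly interpolate the perceived depth.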
- the two-way parallax image data are in effect images that simulate what enters the left and right eyes respectively.
- for how to output and display the two-way parallax image data to obtain the naked-eye 3D display effect, refer to the related prior art, which is not described here.
- FIG. 27(b) gives a visual effect diagram in which, when the image displayed on the display screen display device 8 is viewed with the naked eye, the resulting 3D ultrasound image shows flowing blood flow velocity vector markers.
- FIG. 27(a) gives a visual effect diagram in which, when the image displayed on the display screen display device 8 is viewed with the naked eye, the resulting 3D ultrasound image shows rolling cluster bodies.
- FIG. 8 contains schematic flow diagrams of an ultrasound imaging method in accordance with some embodiments of the present invention. It should be understood that although the various steps in the flowcharts of FIG. 8 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Except as explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other sequences. Moreover, at least some of the steps in FIG. 8 may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in parallel or alternately with other steps or with at least part of the sub-steps or stages of other steps.
- the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product carried on a non-transitory computer-readable storage carrier (e.g. ROM, magnetic disk, optical disk, or server cloud space), including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.
- an ultrasound imaging system comprising:
- a transmitting circuit 2 configured to excite the probe 1 to transmit a volume ultrasonic beam toward a scanning target;
- a receiving circuit 4 and a beamforming module 5 configured to receive echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
- a data processing module 9 configured to acquire three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals, and to obtain fluid velocity vector information of target points within the scanning target based on those signals;
- a 3D image processing module 11 configured to mark the fluid velocity vector information of the target points in the three-dimensional ultrasound image data to form fluid velocity vector markers, obtaining volume image data containing the fluid velocity vector markers;
- a parallax image generating module 12 configured to convert the volume image data into two-way parallax image data, and a display screen display device 8 configured to receive and display the two-way parallax image data.
- the transmitting circuit 2 is configured to perform step S100 above.
- the receiving circuit 4 and the beamforming module 5 are configured to perform step S200 above.
- the data processing module 9 includes a signal processing module 6 and/or an image processing module 7; the signal processing module 6 is used to perform the computation of the velocity component vectors and the fluid velocity vector information, i.e. step S400 above, and the image processing module 7 is configured to perform the image-processing-related steps, i.e. step S300 above: obtaining three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals.
- the 3D image processing module 11 is configured to perform step S500 above, and the parallax image generating module 12 is configured to perform step S600.
- the display screen display device 8 performs the 3D ultrasonic imaging display, i.e. step S700 above.
- the 3D image processing module 11 is further configured to mark, at the corresponding positions in the three-dimensional ultrasound image data, the fluid velocity vectors obtained successively as a target point moves continuously, so that the fluid velocity vector markers present a time-varying, flow-like visual effect when output and displayed.
- the display screen display device 8 includes: a display screen for receiving and displaying the two-way parallax image data together with wearable glasses, or a naked-eye 3D display screen for receiving and displaying the two-way parallax image data. For details, refer to the relevant description in the preceding section.
- the echo signals of the volume plane ultrasonic beam are used to compute the relevant fluid velocity vectors and fluid velocity vector information, as well as the three-dimensional ultrasound image data.
- the transmitting circuit is configured to excite the probe to transmit a volume plane ultrasonic beam toward the scanning target;
- the receiving circuit and the beamforming module are configured to receive echoes of the volume plane ultrasonic beam to obtain volume plane ultrasonic echo signals;
- the data processing module is further configured to acquire, from the volume plane ultrasonic echo signals, three-dimensional ultrasound image data of at least part of the scanning target and fluid velocity vector information of the target points.
- the echo signals of the volume plane ultrasonic beam are used to compute the velocity vectors and fluid velocity vector information.
- the echo signals of the volume focused ultrasonic beam are used to obtain a high-quality ultrasound image.
- the transmitting circuit excites the probe to transmit a volume focused ultrasonic beam toward the scanning target;
- the receiving circuit and the beamforming module are configured to receive echoes of the volume focused ultrasonic beam to obtain volume focused ultrasonic echo signals;
- the data processing module is configured to obtain, from the volume focused ultrasonic echo signals, three-dimensional ultrasound image data of at least part of the scanning target.
- the transmitting circuit excites the probe to transmit a volume plane ultrasonic beam toward the scanning target, inserting transmissions of the volume focused ultrasonic beam toward the scanning target during the process of transmitting the volume plane ultrasonic beam; the receiving circuit and the beamforming module receive echoes of the volume plane ultrasonic beam to obtain volume plane ultrasonic echo signals; the data processing module is configured to obtain fluid velocity vector information of the target points within the scanning target from the volume plane ultrasonic echo signals.
- the data processing module is further configured to obtain, by the gray-scale blood flow imaging technique, enhanced three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals.
- the 3D image processing module is further configured to segment the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data, obtain cloud-like cluster region blocks, and mark the cloud-like cluster region blocks in the three-dimensional ultrasound image data to obtain volume image data containing the cluster bodies, so that the cluster bodies exhibit a rolling visual effect that changes over time when output and displayed.
- the system further includes: a human-machine interaction device configured to acquire commands input by the user; and the 3D image processing module is further configured to perform at least one of the following steps:
- the type of ultrasonic beam the transmitting circuit excites the probe to transmit toward the scanning target is switched according to a command input by the user.
- the 3D image processing module is configured to mark the time-varying fluid velocity vector information of the target points in the three-dimensional ultrasound image data to obtain the volume image data containing the fluid velocity vector markers; the system also includes a spatial stereoscopic display device 800 configured to display the volume image data as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic display technology, wherein the spatial stereoscopic display device 800 includes a holographic display device based on holographic display technology or a volume-pixel display device based on volumetric three-dimensional display technology.
- the parallax image generating module includes a first imaging device 841 and a second imaging device 842.
- the first imaging device 841 and the second imaging device 842 capture the dynamic spatial stereoscopic image from two angles to obtain the two-way parallax image data.
- the first and second imaging devices may have the same structure, for example both infrared cameras or both optical cameras.
- the spatial stereoscopic display device 8 includes one of a holographic display device based on holographic display technology and a volume-pixel display device based on volumetric three-dimensional display technology.
- the human-machine interaction device 10 includes an electronic device 840 with a touch display connected to the data processing module.
- the electronic device 840 is connected to the data processing module 9 via a communication interface (wireless or wired) to receive the three-dimensional ultrasound image data and the fluid velocity vector information of the target points for display on the touch display screen, presenting the ultrasound image (which may be a two-dimensional or three-dimensional ultrasound image displayed from the three-dimensional ultrasound image data) and the fluid velocity vector information superimposed on it; it receives operation commands input by the user on the touch screen and transmits them to the data processing module 9. The operation commands may include any one or several of the commands the user can input to the data processing module 9; the data processing module 9 is configured to derive a related configuration or switching instruction from the operation command and transmit it to the spatial stereoscopic display device 800, which adjusts the display of the spatial stereoscopic image according to the configuration or switching instruction, so that image rotation, image parameter configuration, image display mode switching, and similar effects performed according to the operation commands input by the user on the touch display screen are displayed synchronously on the spatial stereoscopic image.
- when the spatial stereoscopic display device 800 employs the holographic display device shown in FIG. 15, the ultrasound image and the fluid velocity vector information superimposed on it are displayed synchronously on the electronic device 840 connected to the data processing module 9; this gives the viewing user a way to enter operation commands and interact with the displayed spatial stereoscopic image.
- the human-machine interaction device 10 may also be physical operation keys (such as a keyboard, joystick, or scroll wheel), a virtual keyboard, or a gesture input device, such as one equipped with a camera.
- the gesture input device here includes apparatus that captures gesture input by acquiring images and tracks it with image recognition technology, for example acquiring images of the gesture input with an infrared camera and obtaining, through image recognition, the operation instruction represented by the gesture.
- the present invention also provides a three-dimensional ultrasonic fluid imaging system, comprising:
- a transmitting circuit 2 configured to excite the probe 1 to transmit a volume ultrasonic beam toward a scanning target;
- a receiving circuit 4 and a beamforming module 5 configured to receive echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
- a data processing module 9 configured to obtain, from the volume ultrasonic echo signals and by means of a gray-scale blood flow imaging technique, enhanced three-dimensional ultrasound image data of at least part of the scanning target;
- a 3D image processing module 11 configured to segment the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data, obtain cloud-like cluster region blocks, and mark the cloud-like cluster region blocks in the three-dimensional ultrasound image data to obtain volume image data containing cloud-like cluster bodies;
- a parallax image generating module 12 configured to convert the volume image data into two-way parallax image data; and
- a display screen display device 8 configured to output and display the two-way parallax image data so that the human eye observes the cluster bodies rolling over time.
- the transmitting circuit 2 is configured to perform step S100 above.
- the receiving circuit 4 and the beamforming module 5 are configured to perform step S200 above.
- the data processing module 9 includes a signal processing module 6 and/or an image processing module 7; the signal processing module 6 processes the beamformed echo signals, and the image processing module 7 performs the image processing that yields the enhanced three-dimensional ultrasound image data, i.e. step S310 above, from the volume ultrasonic echo signals obtained within the preset time period.
- the 3D image processing module 11 is configured to perform the segmentation and marking of the cluster bodies in the enhanced three-dimensional ultrasound image data in step S510 above, and the parallax image generating module 12 is configured to perform step S600.
- the display screen display device 8 performs the 3D ultrasonic imaging display, i.e. step S700 above.
- the execution of the various functional modules above is as described in the relevant steps of the ultrasound imaging display method and is not repeated here.
- the 3D image processing module is further configured to convert the three-dimensional ultrasound image data into volume image data with a perspective effect, and to mark the time-varying cloud-like cluster region blocks in the volume image data.
- the 3D image processing module is further configured to:
- convert each frame of the three-dimensional ultrasound image data into a three-dimensional perspective rendering image, and mark the cloud-like cluster region blocks in each three-dimensional rendering image to obtain single-frame volume images containing the cloud-like cluster bodies;
- the temporally consecutive multi-frame volume images constitute the volume image data above.
- the 3D image processing module is further configured to perform the following step to convert the three-dimensional ultrasound image data into volume image data with a perspective effect:
- performing tissue structure segmentation on the three-dimensional ultrasound image data, and setting the tissue structure regions obtained by the segmentation to different transparencies.
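The per-region transparency step above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the segmentation is taken as given (a label map), and each label is mapped to an opacity so that, in perspective volume rendering, some tissue renders translucent while other tissue stays opaque.

```python
import numpy as np

def apply_region_transparency(volume, labels, alpha_by_label):
    """Attach per-voxel opacity to a scalar volume according to a
    segmentation label map, producing a (value, alpha) pair for
    perspective volume rendering.
    `alpha_by_label`: dict mapping segment label -> opacity in [0, 1]."""
    alpha = np.zeros_like(volume, dtype=float)
    for label, a in alpha_by_label.items():
        alpha[labels == label] = a
    return volume, alpha

vol = np.ones((2, 2, 2))
seg = np.zeros((2, 2, 2), dtype=int)
seg[0] = 1  # hypothetical label 1 = a tissue region to render translucent
_, alpha = apply_region_transparency(vol, seg, {0: 1.0, 1: 0.3})
```

A renderer would then composite along each viewing ray using these opacities, so the fluid region (and its cluster bodies) remains visible through surrounding tissue.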
- the 3D image processing module is further configured to:
- in the above step of segmenting the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks, segment the region of interest based on image gray scale to obtain cluster region blocks with different grayscale features, and render the cluster region blocks of different grayscale features in different colors in the three-dimensional ultrasound image data;
- the same cloud-like cluster block obtained by segmentation may be rendered by superimposing different colors according to the grayscale variations of different areas within the cluster block.
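The grayscale-to-color rendering just described can be sketched as a simple band lookup. The thresholds and palette below are hypothetical; the patent only requires that different grayscale features map to different colors.

```python
import numpy as np

def colorize_clusters(gray, thresholds=(64, 128, 192)):
    """Map grayscale cluster voxels to one of several RGB colors by
    grayscale band, so cluster regions with different grayscale
    features are rendered in different colors.
    `gray`: array of grayscale values in 0..255; returns an RGB array."""
    palette = np.array([[0, 0, 255],     # darkest band  -> blue
                        [0, 255, 0],     #               -> green
                        [255, 255, 0],   #               -> yellow
                        [255, 0, 0]])    # brightest band -> red
    band = np.digitize(gray, thresholds)  # band index 0..3 per voxel
    return palette[band]

rgb = colorize_clusters(np.array([10, 100, 200, 250]))
```

Superimposing these colors within one cluster block then visualizes its internal grayscale variation, as the text describes.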
- the present invention overcomes the deficiencies of existing ultrasound imaging systems in blood flow imaging technology by providing a three-dimensional ultrasonic fluid imaging method and an ultrasound imaging system suited to imaging and displaying blood flow information, presenting a better 3D ultrasound image through 3D stereoscopic display technology on current advanced display screens. The viewing angle allows the scanning position to be understood in real time, makes the displayed image visualize blood flow information more realistically, truly reproduces the fluid movement within the scanning target, and provides the user with multi-angle, all-round observation perspectives. This gives medical staff more comprehensive and accurate image data, and opens up a new blood flow imaging display mode for blood flow imaging display technology realized on ultrasound systems.
- the present invention also provides a novel display method based on computing the fluid velocity vector information of target points, which conveys the actual flow state of the fluid more realistically and intuitively reflects the direction of each target point along the flow and its movement with the flow.
- the present invention also provides more personalized, customizable services, giving the user more accurate and more intuitive data support for observing the true fluid state.
- the present invention also provides a display mode in which a grayscale-enhanced effect is presented in the ultrasound stereoscopic image, where grayscale variations in the region of interest are characterized by different colors and the flow of the cluster regions is displayed dynamically; compared with conventional displays, the 3D display effect of the present invention is more vivid, more realistic, and more informative.
Abstract
A three-dimensional ultrasonic fluid imaging method and ultrasound imaging system. The system includes: a probe (1); a transmitting circuit (2) for exciting the probe to transmit a volume ultrasonic beam toward a scanning target; a receiving circuit (4) and a beamforming module (5) for receiving echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals; a data processing module (9) for obtaining three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals, and for obtaining fluid velocity vector information of target points within the scanning target based on those signals; a 3D image processing module (11) for marking the fluid velocity vector information of the target points in the three-dimensional ultrasound image data to form fluid velocity vector markers, obtaining volume image data containing the fluid velocity vector markers; a parallax image generating module (12) for converting the volume image data into two-way parallax image data; and a display screen display device (8) for receiving and displaying the two-way parallax image data to form a 3D ultrasound image. The system provides the user with a 3D ultrasound image display by means of 3D display technology.
Description
The present invention relates to fluid information imaging and display technology in ultrasound systems, and in particular to a three-dimensional ultrasonic fluid imaging method and an ultrasound imaging system.
In medical ultrasound imaging equipment, conventional fluid display technology is based only on two-dimensional images. Taking blood flow imaging as an example, ultrasonic waves are radiated into the examined object; the color Doppler flowmeter, like pulsed-wave and continuous-wave Doppler, also forms its image from the Doppler effect between red blood cells and the ultrasonic waves. A color Doppler flowmeter comprises a two-dimensional ultrasound imaging system, a pulsed Doppler (one-dimensional Doppler) blood flow analysis system, a continuous-wave Doppler blood flow measurement system, and a color Doppler (two-dimensional Doppler) blood flow imaging system. An oscillator generates two orthogonal signals with a phase difference of π/2, which are multiplied separately with the Doppler blood flow signal; the products are converted into digital signals by an analog-to-digital (A/D) converter and filtered by a comb filter to remove the low-frequency components produced by vessel walls or valves, then sent to an autocorrelator for autocorrelation detection. Because each sample contains Doppler blood flow information produced by many red blood cells, autocorrelation detection yields a mixed signal of multiple blood flow velocities. The autocorrelation result is fed to a velocity calculator and a variance calculator to obtain the mean velocity, which is stored in the digital scan converter (DSC) together with the FFT-processed blood flow spectrum information and the two-dimensional image information. Finally, according to the direction and magnitude of the blood flow, a color processor pseudo-color-encodes the blood flow data and sends it to a color display, completing the color Doppler blood flow display.
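The autocorrelation-based mean velocity computation described above is commonly realized with the Kasai lag-one autocorrelation estimator. The following is a minimal illustrative sketch of that standard estimator (not this patent's specific circuit): the mean Doppler phase shift between successive pulses at one sample gate gives the mean Doppler frequency, and hence the mean axial velocity.

```python
import numpy as np

def kasai_mean_velocity(iq, prf, f0, c=1540.0):
    """Autocorrelation (Kasai) estimate of mean axial velocity from a
    slow-time ensemble of complex (I/Q) Doppler samples at one gate.
    `iq`: 1-D complex array, `prf`: pulse repetition frequency (Hz),
    `f0`: transmit centre frequency (Hz), `c`: sound speed (m/s)."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))  # lag-1 autocorrelation
    phase = np.angle(r1)                     # mean pulse-to-pulse phase shift
    fd = phase * prf / (2.0 * np.pi)         # mean Doppler frequency (Hz)
    return fd * c / (2.0 * f0)               # mean axial velocity (m/s)

# Synthetic ensemble with a known Doppler shift of 500 Hz.
prf, f0, fd_true = 4000.0, 5e6, 500.0
n = np.arange(16)
iq = np.exp(2j * np.pi * fd_true * n / prf)
v = kasai_mean_velocity(iq, prf, f0)
```

Note the estimator only recovers the axial (along-beam) velocity component and aliases when the Doppler phase shift exceeds ±π per pulse interval; this limitation motivates the vector velocity approach of the invention.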
Color Doppler blood flow display, however, shows only the magnitude and direction of blood flow velocity within the scanning plane, yet laminar flow is not the only flow pattern in blood: more complex patterns such as vortices commonly occur at arterial stenoses. Two-dimensional ultrasound scanning can only reflect the magnitude and direction of blood flow velocity in the scanning plane. Display technology based on two-dimensional ultrasound images likewise cannot faithfully reproduce the flow of liquid inside blood vessels or any other tubular or fluid-bearing organ: it typically presents a few isolated sections, or a pseudo-three-dimensional image reconstructed from several sections, none of which give the physician more comprehensive and accurate diagnostic image information. It is therefore necessary to improve on current fluid imaging technology and provide a more intuitive fluid information display scheme.
Summary of the Invention
In view of the deficiencies of the prior art, it is necessary to provide a three-dimensional ultrasonic fluid imaging method and an ultrasound imaging system that offer a more intuitive fluid information display scheme and give the user a better viewing perspective.
An embodiment of the present invention provides a three-dimensional ultrasonic fluid imaging method, comprising:
transmitting a volume ultrasonic beam toward a scanning target;
receiving echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
obtaining three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals;
obtaining fluid velocity vector information of target points within the scanning target based on the volume ultrasonic echo signals;
marking the fluid velocity vector information of the target points in the three-dimensional ultrasound image data to form fluid velocity vector markers, obtaining volume image data containing the fluid velocity vector markers;
converting the volume image data into two-way parallax image data; and
outputting and displaying the two-way parallax image data.
A three-dimensional ultrasonic fluid imaging method, comprising:
transmitting a volume ultrasonic beam toward a scanning target;
receiving echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
obtaining, from the volume ultrasonic echo signals and by means of a gray-scale blood flow imaging technique, enhanced three-dimensional ultrasound image data of at least part of the scanning target;
segmenting the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks;
marking the cloud-like cluster region blocks in the three-dimensional ultrasound image data to obtain volume image data containing the cluster bodies;
converting the volume image data into two-way parallax image data; and
outputting and displaying the two-way parallax image data so that the cluster bodies present a rolling visual effect that changes over time.
A three-dimensional ultrasonic fluid imaging system, comprising:
a probe;
a transmitting circuit for exciting the probe to transmit a volume ultrasonic beam toward a scanning target;
a receiving circuit and a beamforming module for receiving echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
a data processing module for obtaining three-dimensional ultrasound image data of at least part of the scanning target from the volume ultrasonic echo signals, and for obtaining fluid velocity vector information of target points within the scanning target based on those signals;
a 3D image processing module for marking the fluid velocity vector information of the target points in the three-dimensional ultrasound image data to form fluid velocity vector markers, obtaining volume image data containing the fluid velocity vector markers;
a parallax image generating module for converting the volume image data into two-way parallax image data; and
a display screen display device for receiving and displaying the two-way parallax image data.
A three-dimensional ultrasonic fluid imaging system, comprising:
a probe;
a transmitting circuit for exciting the probe to transmit a volume ultrasonic beam toward a scanning target;
a receiving circuit and a beamforming module for receiving echoes of the volume ultrasonic beam to obtain volume ultrasonic echo signals;
a data processing module for obtaining, from the volume ultrasonic echo signals and by means of a gray-scale blood flow imaging technique, enhanced three-dimensional ultrasound image data of at least part of the scanning target;
a 3D image processing module for segmenting the region of interest characterizing the fluid region in the enhanced three-dimensional ultrasound image data to obtain cloud-like cluster region blocks, and for marking the cloud-like cluster region blocks in the three-dimensional ultrasound image data to obtain volume image data containing the cloud-like cluster bodies;
a parallax image generating module for converting the volume image data into two-way parallax image data; and
a display screen display device for outputting and displaying the two-way parallax image data so that the cluster bodies present a rolling visual effect that changes over time.
The present invention provides an ultrasonic fluid imaging method and system based on 3D display technology, which allow a 3D ultrasound image to be observed by the human eye through a display screen, fully present the fluid motion during display, and give the observer more viewing perspectives.
FIG. 1 is a schematic block diagram of an ultrasound imaging system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a vertically transmitted plane ultrasonic beam according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a deflected plane ultrasonic beam according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a focused ultrasonic beam according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a diverging ultrasonic beam in an embodiment of the present invention;
FIG. 6(a) is a schematic diagram of the elements of a two-dimensional area-array probe; FIG. 6(b) is a schematic diagram of three-dimensional image scanning along a given ultrasound propagation direction using a two-dimensional area-array probe in the present invention; FIG. 6(c) is a schematic diagram of how the relative offset of the scan volume in FIG. 6(b) is measured;
FIG. 7(a) is a schematic diagram of element partitioning of a two-dimensional area-array probe in an embodiment of the present invention; FIG. 7(b) is a schematic diagram of volume focused ultrasonic transmission in an embodiment of the present invention;
In FIG. 8, FIG. 8(a) is a schematic flowchart of a velocity vector marker display method according to one embodiment of the present invention, and FIG. 8(b) is a schematic flowchart of a cluster body display method according to one embodiment of the present invention;
FIG. 9 is a schematic flowchart of a method according to one embodiment of the present invention;
FIG. 10 is a schematic flowchart of a method according to one embodiment of the present invention;
FIG. 11(a) is a schematic diagram of fluid velocity vector information computation in the first mode in one embodiment of the present invention;
FIG. 11(b) is a schematic diagram of fluid velocity vector information computation in the second mode in one embodiment of the present invention;
FIG. 12(a) is a schematic diagram of transmission along two ultrasound propagation directions in an embodiment of the present invention;
FIG. 12(b) is a schematic diagram of fluid velocity vector synthesis based on FIG. 12(a);
FIG. 12(c) is a schematic diagram of computing fluid velocity vectors from speckle in one embodiment of the present invention;
FIG. 12(d) is a schematic diagram of the 8-point interpolation method in one embodiment of the present invention;
FIG. 13(a) is a schematic diagram of a first effect of volume image data in one embodiment of the present invention;
FIG. 13(b) is a schematic diagram of a second effect of volume image data in one embodiment of the present invention;
FIG. 14 is a schematic diagram of a third effect of volume image data in one embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a spatial stereoscopic display device in one embodiment of the present invention;
FIG. 16 is a schematic structural diagram of a spatial stereoscopic display device in one embodiment of the present invention;
FIG. 17 is a schematic structural diagram of a spatial stereoscopic display device in one embodiment of the present invention;
FIG. 18 is a schematic diagram of an effect of volume image data based on the first mode in one embodiment of the present invention;
FIG. 19 is a schematic diagram of an effect of volume image data based on the second mode in one embodiment of the present invention;
FIG. 20 is a schematic diagram of a third effect of volume image data in one embodiment of the present invention;
FIG. 21(a) is a schematic diagram of an imaging effect with cloud-like cluster bodies in one embodiment of the present invention; FIG. 21(b) is a schematic diagram of an imaging effect of cloud-like cluster bodies overlaid with blood flow velocity vector markers in one embodiment; FIG. 21(c) is a schematic diagram of an effect of cloud-like cluster bodies overlaid with color information in one embodiment;
FIG. 22 is a schematic diagram of the effect of a target point being selected to form a trajectory in an embodiment of the present invention;
FIG. 23 is a schematic diagram of converting volume image data into two-way parallax images in an embodiment of the present invention;
FIG. 24 is a schematic diagram of converting volume image data into two-way parallax images in another embodiment of the present invention;
FIG. 25 is a schematic structural diagram of a human-machine interaction mode in an embodiment of the present invention;
FIG. 26 is a schematic diagram of parallax image conversion using virtual cameras in an embodiment of the present invention;
FIG. 27(a) is an effect diagram in which, when the two-way parallax images are output and displayed in an embodiment of the present invention, the virtual 3D ultrasound image observed by the naked eye shows cluster bodies rolling over time;
FIG. 27(b) is an effect diagram in which, when the two-way parallax images are output and displayed in an embodiment of the present invention, the virtual 3D ultrasound image observed by the naked eye shows flowing blood flow velocity vector markers changing over time.
FIG. 1 is a structural block diagram of an ultrasound imaging system according to an embodiment of the present invention. As shown in FIG. 1, the ultrasound imaging system generally includes: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beamforming module 5, a signal processing module 6, an image processing module 7, and a display screen display device 8.
During ultrasound imaging, the transmitting circuit 2 sends delay-focused transmit pulses of a certain amplitude and polarity through the transmit/receive selection switch 3 to the probe 1. Excited by the transmit pulses, the probe 1 transmits ultrasonic waves toward the scanning target (for example, organs, tissues, or blood vessels in a human or animal body, not shown), receives after a certain delay the ultrasonic echoes reflected back from the target region, which carry information about the scanning target, and converts these echoes back into electrical signals. The receiving circuit receives the electrical signals generated by the probe 1, obtains volume ultrasonic echo signals, and sends them to the beamforming module 5. The beamforming module 5 performs focusing delay, weighting, channel summation, and similar processing on the volume ultrasonic echo signals, then sends them to the signal processing module 6 for related signal processing.
The volume ultrasonic echo signals processed by the signal processing module 6 are sent to the image processing module 7. Depending on the imaging mode desired by the user, the image processing module 7 processes the signals differently to obtain image data of different modes, for example two-dimensional image data and three-dimensional ultrasound image data. After logarithmic compression, dynamic range adjustment, digital scan conversion, and similar processing, ultrasound image data of different modes are formed, including two-dimensional image data such as B, C, and D images, and three-dimensional ultrasound image data that can be sent to a display device for three-dimensional or 3D stereoscopic image display.
The image processing module 7 sends the generated three-dimensional ultrasound image data to the 3D image processing module 11, where marking, segmentation, and other processing yield volume image data: single-frame images or multi-frame sequences carrying voxel information.
After passing through the parallax image generating module 12, the volume image data yield two-way parallax image data, which are displayed on the display screen display device 8. Based on 3D display technology and exploiting the parallax between the left and right human eyes, the display screen display device 8 lets the viewer's eyes reconstruct the displayed images into a virtual 3D stereoscopic image of the scanning target (hereinafter a 3D ultrasound image). Display screen display devices fall into two broad classes: glasses-type and naked-eye. A glasses-type device uses a flat display screen together with 3D glasses. A naked-eye device, i.e. a naked-eye 3D display, consists of four parts: a 3D stereoscopic display terminal, playback software, authoring software, and application technology; it is an interleaved stereoscopic display system integrating modern technologies such as optics, photography, electronic computing, automatic control, software, and 3D animation.
The signal processing module 6 and the image processing module 7 may be implemented with one or more processors; the 3D image processing module 11 may likewise be integrated with them on one or more processors, or implemented on an independent processor. The parallax image generating module 12 may be implemented purely in software, or with hardware combined with a software program, as detailed below.
The probe 1 generally includes an array of multiple elements. For each transmission, all elements of the probe 1, or a subset of them, participate in transmitting ultrasound. Each participating element or sub-group of elements is excited by the transmit pulse and transmits ultrasound; the waves transmitted by these elements superpose during propagation to form the synthesized ultrasonic beam transmitted toward the scanning target, and the direction of this synthesized beam is the ultrasound propagation direction referred to herein. The participating elements may be excited by the transmit pulse simultaneously, or with certain delays between their excitation times. By controlling these delays, the propagation direction of the synthesized beam can be changed, as detailed below.
By controlling the delays between the excitation times of the participating elements, the waves they transmit can also be made neither to focus nor to diverge completely during propagation, but instead to form a wave that is, on the whole, approximately planar. Herein, such a focus-free plane wave is called a "plane ultrasonic beam".
Alternatively, by controlling the delays between the excitation times of the participating elements, the beams transmitted by the individual elements can be made to superpose at a predetermined position so that the ultrasound intensity there is maximal, i.e. the waves transmitted by the individual elements are "focused" at that predetermined position, called the "focus". The resulting synthesized beam is a beam focused at that focus, herein called a "focused ultrasonic beam". For example, FIG. 4 is a schematic diagram of transmitting a focused ultrasonic beam. Here, the participating elements (in FIG. 4, only some of the elements of probe 1 participate in the transmission) operate with predetermined transmit delays (i.e. predetermined delays exist between their excitation times), and the waves transmitted by the elements focus at the focus to form the focused ultrasonic beam.
Or, by controlling the delays between the excitation times of the participating elements, the waves transmitted by the participating elements can be made to diverge during propagation, forming a wave that is, on the whole, approximately diverging. Such diverging ultrasound is herein called a "diverging ultrasonic beam", as shown in FIG. 5.
When multiple linearly arranged elements are excited by electrical pulse signals simultaneously, all elements transmit ultrasound at the same time and the propagation direction of the synthesized beam coincides with the normal of the element arrangement plane. For example, FIG. 2 shows a vertically transmitted plane wave: there is no delay between the participating elements (their excitation times coincide), and all elements are excited by the transmit pulse simultaneously. The resulting beam is a plane wave, i.e. a plane ultrasonic beam, whose propagation direction is substantially perpendicular to the transmitting surface of the probe 1; the angle between the propagation direction of the synthesized beam and the normal of the element arrangement plane is zero. If, however, the excitation pulses applied to the individual elements carry a time delay, and the elements transmit in sequence according to that delay, the propagation direction of the synthesized beam forms an angle with the normal of the element arrangement plane: the deflection angle of the synthesized beam. By changing the time delay, both the size of this deflection angle and the deflection direction, within the scanning plane of the synthesized beam, relative to the normal of the element arrangement plane, can be adjusted. For example, FIG. 3 shows a deflected plane wave: here predetermined delays exist between the excitation times of the participating elements, which are excited by the transmit pulse in a predetermined order. The resulting beam is a plane ultrasonic beam whose propagation direction forms an angle with the normal of the element arrangement plane of the probe 1 (e.g. angle a in FIG. 3), which is the deflection angle of the plane ultrasonic beam. Changing the delay time adjusts the size of angle a.
Likewise, whether for a plane, focused, or diverging ultrasonic beam, the "deflection angle" formed between the direction of the synthesized beam and the normal of the element arrangement plane can be adjusted by controlling the delays between the excitation times of the participating elements; the synthesized beam here may be any of the plane, focused, or diverging ultrasonic beams mentioned above.
Furthermore, for three-dimensional ultrasound imaging, as shown in FIG. 6(a), an area-array probe is used; each area-array probe can be regarded as multiple elements 112 arranged in both the row and column directions, each element having a corresponding delay control line for adjusting its delay. By changing the individual delay of each element during transmission and reception, beam steering and dynamic focusing can be applied to the ultrasonic beam, changing the propagation direction of the synthesized beam, scanning the beam through three-dimensional space, and forming a volumetric three-dimensional ultrasound image database. As shown in FIG. 6(b), the area-array probe 1 includes multiple elements 112; by changing the delays corresponding to the elements participating in transmission, the transmitted volume ultrasonic beam can be made to propagate along the direction of the dashed arrow F51 and form, in three-dimensional space, a scan volume A1 (the solid structure drawn with dashed lines in FIG. 6(b)) for acquiring three-dimensional ultrasound image data. This scan volume A1 has a predetermined offset relative to a reference volume A2 (the structure drawn with solid lines in FIG. 6(b)), where the reference volume A2 is the scan volume formed in three-dimensional space when the beams transmitted by the participating elements propagate along the normal of the element arrangement plane (solid arrow F52 in FIG. 6(b)). The offset of the scan volume A1 relative to the reference volume A2 thus measures, in three-dimensional space, the deflection of a scan volume formed along a different ultrasound propagation direction relative to the reference volume A2; herein it can be measured by the combination of two angles. First, within the scan volume, the propagation direction of the beam in the scanning plane A21 formed by the beam (the quadrilateral drawn with dashed lines in FIG. 6(b)) makes a predetermined deflection angle Φ with the normal of the element arrangement plane, with Φ chosen in the range [0°, 90°). Second, as shown in FIG. 6(c), in a planar rectangular coordinate system on the element arrangement plane P1, a rotation angle θ is formed by rotating counterclockwise from the X axis to the line containing the projection P51 of the beam propagation direction onto the plane P1 (the dash-dot arrow within plane P1 in FIG. 6(c)), with θ chosen in the range [0°, 360°). When the deflection angle Φ is zero, the offset of the scan volume A1 relative to the reference volume A2 is zero. In three-dimensional ultrasound imaging, changing the individual delay of each element changes the deflection angle Φ and the rotation angle θ, thereby adjusting the offset of the scan volume A1 relative to the reference volume A2 and forming different scan volumes along different ultrasound propagation directions in three-dimensional space. The transmission of the above scan volumes can also be achieved with a combined probe structure in which linear-array probes are arranged in array form, with the same transmission scheme. For example, in FIG. 6(b), the volume ultrasonic echo signals returned from scan volume A1 yield three-dimensional ultrasound image data B1, and those returned from scan volume A2 yield three-dimensional ultrasound image data B2.
本文中将“向扫描目标发射的在扫描目标所在的空间内传播用以形成上述扫描体”的超声波束视为体超声波束,其可以包括一次或多次发射的超声波束的集合。那么根据超声波束的类型,“向扫描目标发射的在扫描目标所在的空间内传播用以形成上述扫描体”的平面超声波束视为体平面超声波束,“向扫描目标发射的在扫描目标所在的空间内传播用以形成上述扫描体”的聚焦超声波束视为体聚焦超声波束,“向扫描目标发射的在扫描目标所在的空间内传播用以形成上述扫描体”的发散超声波束视为体发散超声波束,等等,体超声波束可以包括体平面超声波束、体聚焦超声波束、体发散超声波束等,依次类推,可在“体”和“超声波束”之间冠以超声波束的类型名称。
A volume plane ultrasonic beam usually covers almost the entire imaging region of probe 1, so when imaging with a volume plane beam a single transmission yields one frame of three-dimensional ultrasound image, giving a very high frame rate. With a volume focused ultrasonic beam, because the beam converges at a focus, only one or a few scan lines are obtained per scan, and multiple transmissions are needed to obtain all scan lines in the imaging region, which are then combined into one frame of three-dimensional ultrasound image of the region. The frame rate of volume focused beam imaging is therefore relatively low. However, the energy of each focused transmission is concentrated, and imaging occurs only where the energy is concentrated, so the echo signal has a high signal-to-noise ratio and can be used to obtain ultrasound image data of good tissue-structure quality.
Based on three-dimensional ultrasound imaging technology and 3D display technology, the present invention superimposes fluid velocity vector information of the fluid on a 3D ultrasound image, providing the user with a better viewing perspective: fluid information such as blood flow speed and direction at the scan position can be observed in real time, the human eye can observe a more stereoscopic, near-lifelike virtual 3D ultrasound image, and the path traveled by the flowing fluid is reproduced stereoscopically. The fluids referred to herein may include body fluids such as blood, intestinal fluid, lymph, interstitial fluid, and intracellular fluid. Embodiments of the invention are described in detail below with reference to the drawings.
As shown in Fig. 8, this embodiment provides a three-dimensional ultrasound fluid imaging method which, based on three-dimensional ultrasound imaging, displays ultrasound images on a display screen via 3D display technology and reproduces a stereoscopic, near-lifelike 3D imaging effect to the observing eye. It offers the user a better viewing perspective and a richer visual experience distinct from conventional displays, makes the true scan position clear in real time, renders fluid information more realistically, provides medical staff with more comprehensive and precise image analysis results, and opens up a new three-dimensional display mode for the fluid imaging display technology realized on ultrasound systems.
In Fig. 8, Fig. 8(a) is a flow diagram of displaying velocity vector markers in the three-dimensional ultrasound fluid imaging method according to one embodiment of the invention, and Fig. 8(b) is a flow diagram of displaying cluster bodies in the method according to one embodiment. The two share some steps, and some steps may contain one another, as detailed below.
In step S100, the transmitting circuit 2 excites the probe 1 to transmit a volume ultrasonic beam toward the scan target, so that the beam propagates in the space containing the scan target to form the scan body shown in Fig. 6. In some embodiments of the invention, the probe 1 is an area array probe, or a probe assembly in which linear array probes are arranged in an array, and so on. An area array probe or array-type probe assembly ensures that feedback data for a whole scan body is obtained promptly in a single scan, increasing scanning and imaging speed.
The volume ultrasonic beams transmitted toward the scan target herein may include at least one, or a combination of at least two, of various beam types such as volume focused, volume unfocused, volume virtual-source, volume non-diffracting, volume divergent, and volume plane ultrasonic beams ("at least two" being inclusive of two, likewise below). Embodiments of the invention are of course not limited to these types of volume ultrasonic beam.
In some embodiments of the invention, as shown in Fig. 9, a volume-plane-wave scanning scheme saves three-dimensional scanning time and raises the imaging frame rate, enabling high-frame-rate fluid velocity vector imaging. Step S100 therefore includes step S101: transmitting a volume plane ultrasonic beam toward the scan target. In step S201, the echoes of the volume plane ultrasonic beam are received to obtain volume plane ultrasound echo signals, from which three-dimensional ultrasound image data can be reconstructed and/or the fluid velocity vector information of target points within the scan target can be computed. The fluid velocity vector information mentioned herein contains at least the velocity vector of a target point (i.e. speed magnitude and direction), and may further contain the corresponding position information of the target point, as well as any other velocity-related information about the target point derivable from speed magnitude and direction, such as acceleration. For example, in Fig. 9, step S301 obtains three-dimensional ultrasound image data of at least part of the scan target from the volume plane ultrasound echo signals, and step S401 obtains the fluid velocity vector information of target points within the scan target based on those signals.
The scan target may be a tubular tissue structure containing flowing matter — an organ, tissue, vessel, etc. of a human or animal body — and a target point within the scan target may be a point or position of interest within it. It typically corresponds to a position in the two parallax image data streams, converted from the volume image data of the scan target and presented on the display device; through the image conversion mapping, this position corresponds to a virtual spatial point or virtual spatial position of interest that can be marked or displayed in the virtual 3D ultrasound image, and may be a virtual spatial point or the neighborhood of one, likewise below. Since the 3D ultrasound image is virtual, the target point corresponds to a virtual spatial point or position in the 3D ultrasound image; through the spatial image mapping, it corresponds to the mapped position in the image shown on the display screen, i.e. a pixel or pixel neighborhood in each of the two parallax image data streams, and likewise to a voxel or voxel neighborhood in the three-dimensional ultrasound image data.
Alternatively, in step S100, a volume focused ultrasonic beam may be transmitted toward the scan target so that it propagates in the space containing the scan target to form a scan body; in step S200 the echoes of that volume focused beam are received to obtain volume focused ultrasound echo signals, from which three-dimensional ultrasound image data can be reconstructed and/or the fluid velocity vector information of target points within the scan target can be computed.
Or again, as shown in Fig. 10, step S100 includes steps S101 and S102: in step S101 a volume plane ultrasonic beam is transmitted toward the scan target, its echoes are received in step S201 to obtain volume plane ultrasound echo signals, and in step S401 the fluid velocity vector information of target points within the scan target is obtained based on them; in step S102 a volume focused ultrasonic beam is transmitted toward the scan target, its echoes are received in step S202 to obtain volume focused ultrasound echo signals, and in step S302 three-dimensional ultrasound image data of at least part of the scan target is obtained from them. The volume focused ultrasound echo signals can be used to reconstruct high-quality three-dimensional ultrasound image data, so that image data of good quality serves as the background image representing the tissue structure.
If two types of volume ultrasonic beam are used in step S100, the two are transmitted toward the scan target alternately. For example, the transmission of volume focused beams is inserted into the transmission of volume plane beams, i.e. steps S101 and S102 of Fig. 10 are executed alternately. This keeps the acquisition of the two kinds of volume-beam image data synchronized and improves the accuracy of superimposing the target points' fluid velocity vector information on the background image.
In step S100, to obtain volume ultrasound echo signals for computing the target points' fluid velocity vector information, volume ultrasonic beams may be transmitted toward the scan target in the manner of Doppler imaging — for example, along one ultrasound propagation direction, so that the beam propagates in the space containing the scan target to form one scan body. The three-dimensional ultrasound image data used to compute the target points' fluid velocity vector information is then obtained from the volume ultrasound echo signals fed back from that one scan body.
Of course, so that the computed fluid velocity vector information reproduces the target point's velocity vector in real three-dimensional space more faithfully, in some embodiments of the invention volume ultrasonic beams may be transmitted toward the scan target along multiple propagation directions to form multiple scan bodies, each scan body originating from the beam transmitted along one propagation direction. The image data for computing the target points' fluid velocity vector information is obtained from the volume ultrasound echo signals fed back from these multiple scan bodies. For example, steps S200 and S400 include:
first, receiving the echoes of the volume ultrasonic beams from the multiple scan bodies to obtain multiple groups of volume ultrasound echo signals;
then, computing one velocity component of a target point within the scan target based on one group of volume ultrasound echo signals, and obtaining multiple velocity components from the multiple groups respectively;
finally, synthesizing the target point's velocity vector from the multiple velocity components, generating the target point's fluid velocity vector information.
"Multiple ultrasound propagation directions" means two or more propagation directions, "or more" being inclusive, likewise below.
When transmitting ultrasonic beams toward the scan target along multiple propagation directions, the transmissions may be executed alternately by propagation direction. For example, with two propagation directions, a volume ultrasonic beam is first transmitted along the first direction and then along the second, completing one scan cycle, after which the cycle is repeated. Alternatively, a volume ultrasonic beam may be transmitted along one direction, then along another, until all propagation directions have been covered, completing the scan. Different propagation directions are obtained by changing the delays of each participating element or each group of participating elements, as explained with reference to Figs. 2 to 6(a)-6(c).
For example, transmitting volume plane ultrasonic beams toward the scan target along multiple propagation directions may include: transmitting a first volume ultrasonic beam having a first propagation direction, and transmitting a second volume ultrasonic beam having a second propagation direction. The echoes of the first and second volume ultrasonic beams are received respectively, obtaining first and second volume ultrasound echo signals; two velocity components are obtained from these two groups of echo signals and synthesized into the target point's fluid velocity vector. For the setting of propagation directions, see the detailed description of Fig. 2 above. In some embodiments, the first and second volume ultrasonic beams may be plane ultrasonic beams, in which case the first and second volume ultrasound echo signals become first and second volume plane ultrasound echo signals.
As another example, transmitting volume plane ultrasonic beams along multiple propagation directions may include: transmitting volume ultrasonic beams toward the scan target along N propagation directions (N being any natural number greater than or equal to 3) and receiving their echoes, obtaining N groups of volume ultrasound echo signals, each group originating from the beam transmitted along one propagation direction. These N groups of echo signals can be used to compute the target points' fluid velocity vector information.
Furthermore, in some embodiments of the invention, some or all of the ultrasound transmitting elements may be excited to transmit volume ultrasonic beams toward the scan target along one or more propagation directions, so that the beams propagate in the space containing the scan target to form scan bodies. For example, the volume ultrasonic beam in this embodiment may be a volume plane ultrasonic beam.
Or, in some embodiments of the invention, as shown in Figs. 7(a) and 7(b), the ultrasound transmitting elements may be divided into multiple element zones 111, and some or all zones excited to transmit volume ultrasonic beams along one or more propagation directions to form scan bodies, each scan body originating from the beam transmitted along one propagation direction. For the formation principle of scan bodies, see the detailed description of Figs. 6(a)-6(c) above; it is not repeated here. The volume ultrasonic beam in this embodiment may include, without limitation, a volume focused or volume plane ultrasonic beam. When a volume focused beam is used, after dividing the transmitting elements into zones, exciting one zone produces one focused beam, while exciting multiple zones simultaneously produces multiple focused beams at once, forming a volume focused ultrasonic beam and obtaining one scan body. As shown in Figs. 7(a) and 7(b), taking focused-beam transmission as an example, each element zone 111 produces at least one focused beam (the arrowed arcs in the figures); when multiple element zones 111 are excited simultaneously to produce focused beams, the multiple focused beams propagate in the space containing the scan target to form a scan body 11 composed of volume focused beams. The focused beams lying in one plane within scan body 11 form a scan plane 113 (shown by the solid arrows, each solid arrow representing one focused beam), and scan body 11 can also be regarded as composed of multiple scan planes 113. Changing the delays of the transmitting elements within each element zone 111 changes the pointing of the focused beams and hence the propagation directions of the multiple focused beams in the space containing the scan target.
In some embodiments of the invention, volume ultrasonic beams are transmitted multiple times along each propagation direction to obtain multiple volume ultrasound echo signals for the subsequent ultrasound image data processing — for example, multiple volume plane beams along each of multiple propagation directions, or multiple volume focused beams along one or more propagation directions. Each transmission of a volume ultrasonic beam correspondingly yields one volume ultrasound echo signal.
Executing the multiple transmissions alternately by propagation direction allows the obtained echo data to be used to compute the target points' velocity vectors for the same instant, improving the computation accuracy of the fluid velocity vector information. For example, to transmit N volume ultrasonic beams along each of three propagation directions, at least one beam may first be transmitted along the first direction, then at least one along the second direction, then at least one along the third direction, completing one scan cycle; the cycle is repeated until the transmissions along all propagation directions are completed. Within one scan cycle the number of transmissions per propagation direction may be the same or different. For example, with two propagation directions, the order is A1 B1 A2 B2 A3 B3 A4 B4 … Ai Bi, and so on, where Ai is the i-th transmission along the first direction and Bi the i-th transmission along the second. With three propagation directions, the order may be A1 B1 B2 C1 A2 B3 B4 C2 A3 B5 B6 C3 …, and so on, where Ai, Bi, and Ci are the i-th transmissions along the first, second, and third directions respectively (here the second direction fires twice per cycle, illustrating unequal transmission counts within one cycle).
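The interleaving rules above can be sketched as a small scheduler. This is an illustrative helper, not part of the claimed system; direction labels and counts are hypothetical:

```python
def interleaved_schedule(counts_per_cycle, n_cycles):
    """Build an interleaved transmit schedule.

    counts_per_cycle: for each propagation direction (labelled 'A', 'B',
    'C', ...), how many shots it fires per scan cycle.
    Returns labels like 'A1', 'B1', 'B2', 'C1', ... in firing order.
    """
    labels = [chr(ord('A') + d) for d in range(len(counts_per_cycle))]
    next_idx = [1] * len(counts_per_cycle)  # per-direction shot counter
    schedule = []
    for _ in range(n_cycles):
        for d, shots in enumerate(counts_per_cycle):
            for _ in range(shots):
                schedule.append(f"{labels[d]}{next_idx[d]}")
                next_idx[d] += 1
    return schedule
```

With `counts_per_cycle=[1, 2, 1]` this reproduces the three-direction pattern A1 B1 B2 C1 A2 B3 B4 C2 … described above.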
Further, when two types of ultrasonic beam are selected for transmission toward the scan target in step S100, the two may be transmitted alternately. For example, in some embodiments of the invention, step S100 includes:
first, transmitting multiple volume focused ultrasonic beams toward the scan target, to acquire data for reconstructing three-dimensional ultrasound image data;
then, transmitting multiple volume plane ultrasonic beams toward the scan target along one or more propagation directions, to acquire the image data for computing the target points' velocity vectors.
On this basis, the transmission of volume focused beams can be inserted into the transmission of volume plane beams — for instance, distributing the multiple focused-beam transmissions uniformly among the multiple plane-beam transmissions.
For example, the continuous "Ai Bi Ci" plane-beam transmission sequence above mainly serves to obtain the data for computing the target points' velocity information, while the transmission of the other beam type, used for reconstructing the three-dimensional ultrasound image, is inserted into that continuous "Ai Bi Ci" sequence. Taking the insertion of multiple volume focused transmissions into the continuous "Ai Bi Ci" plane-beam sequence as an example, the scheme of alternately transmitting the two beam types is explained in detail as follows.
Volume plane ultrasonic beams are transmitted toward the scan target along three propagation directions in the order
A1 B1 C1 D1 A2 B2 C2 D2 A3 B3 C3 D3 … Ai Bi Ci Di, and so on,
where Ai, Bi, and Ci are the i-th transmissions along the first, second, and third propagation directions respectively, and Di is the i-th volume focused beam transmission.
The above gives a relatively simple way of inserting the focused-beam transmissions. Alternatively, one focused-beam transmission may be inserted after multiple plane-beam transmissions along the different propagation directions; or at least part of the multiple plane-beam transmissions may alternate with at least part of the multiple focused-beam transmissions; and so on — any alternating transmission scheme achieving such partial interleaving of the two is possible. In this embodiment, the volume focused beams yield three-dimensional ultrasound image data of good quality, while the high frame rate of volume plane beams yields highly real-time fluid velocity vector information; alternating the two waveform types gives the two data acquisitions better synchronization.
Hence, the execution order and rules for transmitting multiple volume ultrasonic beams toward the scan target along different propagation directions may be chosen freely; they are not enumerated one by one here, nor limited to the specific embodiments provided above.
In step S200, the receiving circuit 4 and beamforming module 5 receive the echoes of the volume ultrasonic beams transmitted in step S100 and obtain volume ultrasound echo signals.
Whatever type of volume ultrasonic beam step S100 uses, step S200 receives the echoes of that type and generates volume ultrasound echo signals of the corresponding type. For example, receiving the echoes of the volume focused beams transmitted in step S100 yields volume focused ultrasound echo signals; receiving the echoes of the volume plane beams yields volume plane ultrasound echo signals; and so on, the beam-type name being inserted between "volume" and "ultrasound echo signal".
When the receiving circuit 4 and beamforming module 5 receive the echoes of the beams transmitted in step S100, each participating element or group of elements may time-share transmission and reception; or the probe elements may be divided into a receiving part and a transmitting part, each receiving element or group of receiving elements then receiving the echoes; and so on. For the reception of volume ultrasonic beams and the acquisition of volume ultrasound echo signals, methods commonly used in the art may be consulted.
When a volume ultrasonic beam is transmitted along each propagation direction in step S100, step S200 receives its echoes and correspondingly obtains one group of volume ultrasound echo signals. For example, when the echoes of a beam transmitted toward the scan target along one propagation direction are received, step S200 obtains one group of volume ultrasound echo signals, and correspondingly, in steps S300 and S400, the three-dimensional ultrasound image data of at least part of the scan target and the target points' fluid velocity vector information are obtained from that group; when step S200 receives the echoes of beams transmitted along multiple propagation directions, multiple groups of volume ultrasound echo signals are obtained, each group originating from the echoes of the beam transmitted along one direction. Correspondingly, in steps S300 and S400, the three-dimensional ultrasound image data of at least part of the scan target is obtained from one of the groups, and the target points' fluid velocity vector information can be obtained from the multiple groups.
Furthermore, when volume ultrasonic beams can be transmitted multiple times along each propagation direction, the corresponding group of echo signals obtained in step S200 includes multiple volume ultrasound echo signals, one per transmission.
For example, when step S100 transmits multiple volume plane beams along each of multiple propagation directions, step S200 may receive the echoes of the volume plane beams corresponding to the multiple directions, obtaining multiple groups of volume plane ultrasound echo signals; each group includes multiple volume plane ultrasound echo signals, each of which originates from the echoes of one execution of transmitting a volume plane beam toward the scan target along one propagation direction.
As another example, when step S100 transmits multiple volume focused beams toward the scan target, step S200 receives their echoes and obtains multiple groups of volume focused ultrasound echo signals.
Thus, whatever beam type and number of transmissions step S100 uses, step S200 receives the echoes of that beam type and generates the corresponding number of groups of volume ultrasound echo signals of the corresponding type.
In step S300, the image processing module 7 obtains three-dimensional ultrasound image data of at least part of the scan target from the volume ultrasound echo signals. Applying 3D beamforming imaging to the echo signals yields three-dimensional ultrasound image data such as B1 and B2 shown in Fig. 6(b), which may include: the position information of spatial points and the image information corresponding to each spatial point, the image information including gray-scale attributes, color attributes, and other characteristic information.
In some embodiments of the invention, the three-dimensional ultrasound image data may be imaged with volume plane beams or with volume focused beams. Because the energy of each focused transmission is concentrated, and imaging occurs only where the energy is concentrated, the echo SNR is high and the quality of the resulting three-dimensional image data good; moreover the focused beam's main lobe is narrow and its side lobes low, so the lateral resolution of the resulting data is also high. Hence, in some embodiments of the invention, the three-dimensional ultrasound image data of step S300 may be imaged with volume focused beams; and, for still higher quality, step S100 may transmit the volume focused beam multiple times so that one scan yields one frame of three-dimensional ultrasound image data.
Of course, the three-dimensional ultrasound image data may also be obtained from the volume plane ultrasound echo signals obtained in step S200 as described above. When multiple groups of volume ultrasound echo signals are obtained in step S200, one group may be selected for obtaining the three-dimensional image data of at least part of the scan target; or three-dimensional image data with optimized image quality may be obtained based on the multiple groups.
To present the overall movement of the fluid in the 3D ultrasound image, step S300 may further include step S310 of Fig. 8(b): obtaining enhanced three-dimensional ultrasound image data of at least part of the scan target from the volume ultrasound echo signals by gray-scale flow imaging. Alternatively, in the dynamic cluster-body display method shown in Fig. 8(b), step S310 is applied after step S200. Gray-scale flow imaging, also called two-dimensional flow display, is a newer imaging technique that uses digitally encoded ultrasound to observe blood flow, vessels, and surrounding soft tissue and displays them in gray scale.
The processing of three-dimensional ultrasound image data in the embodiments above can be understood either as three-dimensional processing of the whole volumetric three-dimensional ultrasound image database, or as the collection of results of separately processing the one or more two-dimensional ultrasound image data sets contained in one frame of three-dimensional data. Accordingly, in some embodiments of the invention, step S310 may include: separately processing, by gray-scale flow imaging, the one or more two-dimensional ultrasound image data sets contained in a frame of three-dimensional ultrasound image data, and then assembling them into the enhanced three-dimensional ultrasound image data of the scan target.
In step S400, the image processing module 7 obtains the fluid velocity vector information of target points within the scan target based on the volume ultrasound echo signals obtained in step S200. The fluid velocity vector information mentioned here contains the target point's velocity vector (i.e. speed magnitude and direction), and/or the target point's corresponding position information in the three-dimensional ultrasound image data. Through the image mapping relationship by which the three-dimensional image data is converted into the two parallax image data streams in step S600, the target point's corresponding position information in each of the two parallax streams can be obtained from its corresponding position in the three-dimensional data; conversely, based on the same mapping, the target point's position in the three-dimensional data can be obtained from its positions in the two parallax streams.
In this embodiment, target points can be selected by the user: instructions input by the user, acquired through the human-machine interaction device, set the distribution density of target points within the scan target or the target points' positions (including selected target-point positions, or the initial positions used for computing the target points' fluid velocity vectors). For example, the distribution density may be selected by moving a cursor displayed in the image or by gesture input; a user-input distribution-density instruction is acquired, and the target points are selected at random within the scan target according to it. And/or, a target-point position may be selected by moving the displayed cursor or by gesture input; a user-input marking-position instruction is acquired, and the target points are obtained according to it. A target point comprises one or more discretely distributed voxels, voxel neighborhoods, or data blocks; the distribution density refers to how densely target points may appear within a predetermined region, which may be the whole volumetric region of the scan target or a partial region of it — i.e. the region containing the initial positions used when computing the target points' velocity vectors in the second mode described below.
The invention is not limited to this. For example, target-point positions, or the initial positions for computing the target points' fluid velocity vectors, may also be selected at random within the scan target according to a distribution density preset by the system. This approach gives the user flexible choices and improves the user experience.
The process, included in step S400, of obtaining the fluid velocity vector information of target points within the scan target based on the volume ultrasound echo signals is explained in detail below.
The target points' fluid velocity vector information computed in step S400 is mainly used for marking on the three-dimensional ultrasound image data, so, depending on the display mode of the fluid velocity vector information, different fluid velocity vector information may be obtained in step S400.
For example, in some embodiments of the invention, step S400 includes: based on the volume ultrasound echo signals obtained in step S200, computing the target point's fluid velocity vector at a first display position in the three-dimensional image data at different instants, so as to obtain the target point's fluid velocity vector information in the three-dimensional image data at those instants. At display time, the information displayed can then be the fluid velocity vector information at the first display position in the three-dimensional image data of each instant. As shown in Fig. 11(a), based on the echo signals obtained in step S200, the three-dimensional image data P1, P2, …, Pn corresponding to instants t1, t2, …, tn can be obtained respectively, and the target point's fluid velocity vector at the first display position (the position of the black sphere in the figure) is then computed in the three-dimensional data of each instant. In this embodiment, the first display position of the target point in the data of every instant remains at position (X1, Y1, Z1) in the three-dimensional image data. On this basis, when marking the fluid velocity vector information in the subsequent step S500, the fluid velocity vectors computed for the different instants are marked at position (X1, Y1, Z1) in the three-dimensional image data. If the target points are chosen partly or wholly by the user, or by system default, as in the specific embodiments above, the corresponding first display positions are then known, and the fluid velocity vector information at the first display position in the three-dimensional data of the current instant is computed for marking. This display mode is referred to herein as the first mode, likewise below.
In other embodiments of the invention, step S400 includes: based on the volume ultrasound echo signals obtained in step S200, computing the fluid velocity vectors obtained successively as the target point moves continuously to the corresponding positions in the three-dimensional image data, thereby obtaining the target point's fluid velocity vector information. In this embodiment, the fluid velocity vector of the target point as it moves, within a time interval, from one position to another in the three-dimensional image data is computed repeatedly, yielding the fluid velocity vectors at each successive corresponding position in the data as the target point moves on continuously from its initial position. That is, in this embodiment the positions in the three-dimensional image data at which the fluid velocity vectors are evaluated can themselves be obtained by computation. In the following step S500, what is marked can then be the fluid velocity vector information at the computed positions in the three-dimensional image data of each instant.
As shown in Fig. 11(b), based on the echo signals obtained in step S200, the three-dimensional image data P11, P12, …, P1n corresponding to instants t1, t2, …, tn can be obtained respectively. Then, with reference to the embodiments above — target points selected partly or wholly by the user, or the target-point distribution density set by system default — the initial position of the target point is determined, e.g. the first point at position (X1, Y1, Z1) in Fig. 11(b), and the fluid velocity vector at that initial position in the data P11 of instant t1 is computed (the arrow marker in P11). Next, the position (X2, Y2, Z2) to which the target point (the black dot in the figure) moves, from its initial position in the data P11 of instant t1, in the data P12 of instant t2 is computed; the fluid velocity vector at position (X2, Y2, Z2) in P12 is then obtained from the volume ultrasound echo signals, for marking into the three-dimensional image data. Specifically, moving along the direction of the fluid velocity vector at position (X1, Y1, Z1) in the data P11 of instant t1 for one time interval (where instant t2 − instant t1 = the time interval) gives the displacement reached at the second instant t2; the second display position, in the three-dimensional data of the second instant, of the target point of the first instant t1 is thus found, and the fluid velocity vector at this second display position is obtained from the echo signals of step S200, giving the target point's fluid velocity vector information in the data P12 of instant t2. By analogy, for every pair of adjacent instants, the target point is moved along the direction of its fluid velocity vector of the first instant for the inter-instant time interval to obtain the displacement; its corresponding position in the three-dimensional data of the second instant is determined from that displacement; and the fluid velocity vector at the corresponding position in the ultrasound image, reached as the target point moves from the first instant to the second, is obtained from the echo signals. In this way, the blood-flow fluid velocity vector information of the target point moving continuously from (X1, Y1, Z1) to (Xn, Yn, Zn) in the three-dimensional image data can be obtained — the fluid velocity vectors at the corresponding positions in the data of the successive instants as the target point moves continuously from its initial position — for obtaining the target point's fluid velocity vector information and marking it into the three-dimensional image data for superimposed display.
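The position-update loop described above can be sketched as a simple advection step. This is a minimal sketch: `velocity_at` stands in for the per-frame velocity estimate derived from the echo signals, and all names are illustrative:

```python
def advect(positions, velocity_at, dt, n_steps):
    """Advance target points through a velocity field (second-mode sketch).

    positions: list of (x, y, z) initial target points.
    velocity_at: function (x, y, z, step) -> (vx, vy, vz), a stand-in for
    the velocity vector computed at that position and instant.
    Returns each point's trajectory as a list of positions.
    """
    tracks = []
    for x, y, z in positions:
        track = [(x, y, z)]
        for step in range(n_steps):
            vx, vy, vz = velocity_at(x, y, z, step)
            # displacement over one frame interval dt along the current vector
            x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
            track.append((x, y, z))
        tracks.append(track)
    return tracks
```

Each entry of a track corresponds to one instant t1, t2, …, tn, i.e. the positions at which the fluid velocity vectors are marked.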
In the display manner of this embodiment, the target point's displacement over a time interval is computed, its corresponding position in the three-dimensional image data determined from that displacement, and movement proceeds at that interval from the initially selected target point. The time interval may be determined by the system transmit rate, by the display frame rate, or may be a user-input time interval: the position the target point reaches after moving for the user-input interval is computed, and the fluid velocity vector information at that position is then obtained for comparative display. Initially, N initial target points may be selected with the human-machine interaction device, or set according to system-default distribution positions or distribution density; at each initial target point, the configured fluid velocity vector marker can represent the speed magnitude and direction at that point, as shown in Fig. 11(b). In step S500, marking the fluid velocity vectors obtained as the target point moves continuously to the corresponding positions in the three-dimensional image data forms velocity vector markers that appear to flow as time passes; the fluid velocity vector marker may take any shape. By marking the fluid velocity vector information computed as in Fig. 11(b), the velocity vector markers present a time-varying, flowing visual effect in the output display: the arrow of every target point changes position, and the movement of markers such as stereoscopic arrows forms a comparable visualized fluid-flow process, so that the user can observe a near-real fluid-flow display — for example, the flow of blood through a vessel. This display mode is referred to herein as the second mode, likewise below.
Based on the target points selected partly or wholly by the user, or by system default, and depending on the volume-beam transmission forms of step S100, the embodiments above may use the following several ways of obtaining, from the volume ultrasound echo signals, the fluid velocity vector of a target point within the scan target at the corresponding position in the three-dimensional image data at any instant.
First way: from the single group of volume ultrasound echo signals obtained by transmitting a volume ultrasonic beam along one propagation direction in step S100, compute the blood-flow fluid velocity vector information of target points within the scan target. In this process, the fluid velocity vector at the target point's corresponding position in the volume image data can be obtained by computing the target point's displacement and direction of movement within a preset time interval.
As noted above, volume plane ultrasound echo signals may be used in this embodiment to compute the target points' fluid velocity vector information; in some embodiments of the invention, the target point's displacement and direction of movement within the preset time interval are computed based on one group of volume plane ultrasound echo signals.
The method in this embodiment of computing the fluid velocity vector at the target point's corresponding position in the volume image data may use speckle-tracking-like methods; or Doppler ultrasound imaging may be used to obtain the target point's fluid velocity vector along one propagation direction; or the target point's velocity component vectors may be obtained based on the temporal and spatial gradients at the target point; and so on.
For example, as shown in Fig. 12(c), in some embodiments of the invention the process of obtaining, from the volume ultrasound echo signals, the fluid velocity vector at the target point's corresponding position in the three-dimensional image data may include the following steps.
First, at least two frames of three-dimensional ultrasound image data can be obtained from the volume ultrasound echo signals obtained above — for example, at least a first frame and a second frame of three-dimensional ultrasound image data.
As noted above, volume plane ultrasonic beams may be used in this embodiment to acquire the image data for computing the target points' fluid velocity vectors. A plane ultrasonic beam propagates through essentially the entire imaging region; therefore, transmitting a group of same-angle volume plane beams with a 2D area array probe and applying 3D beamforming imaging on reception yields one frame of three-dimensional image data per transmission. At a frame rate of 10,000 — i.e. 10,000 transmissions per second — 10,000 frames of three-dimensional image data are obtained in one second. Herein, the three-dimensional ultrasound image data of the scan target obtained by correspondingly processing the echo signals of the volume plane beams is called "volume plane beam echo image data".
Then, a tracking volume is selected in the first frame of three-dimensional image data, containing the target point whose velocity vector is wanted. For example, the tracking volume may be a volume of any shape centered on the target point, such as a cubic region — the small cube in Fig. 12(c).
Next, the volume corresponding to the tracking volume is searched for in the second frame of three-dimensional image data — for example, the volume with maximum similarity to the tracking volume is taken as the tracking result volume. The similarity measure may be any measure commonly used in the art; for example, three-dimensional matching models such as the sum of absolute differences,
(A, B, C) = argmin over A, B, C of Σ(i=1..M) Σ(j=1..N) Σ(k=1..L) | X1(i, j, k) − X2(i + A, j + B, k + C) |,
or the normalized cross-correlation,
(A, B, C) = argmax over A, B, C of [ Σ(i,j,k) (X1(i, j, k) − X̄1)(X2(i + A, j + B, k + C) − X̄2) ] / √[ Σ(i,j,k) (X1(i, j, k) − X̄1)² · Σ(i,j,k) (X2(i + A, j + B, k + C) − X̄2)² ],
where X1 is the first frame of three-dimensional ultrasound image data and X2 the second; i, j, and k are the three-dimensional image coordinates; the argmin (or, for the correlation form, argmax) operator denotes the values of A, B, and C at which the expression to its right attains its minimum (maximum); A, B, and C represent the new position; M, N, and L are the dimensions of the tracking volume; and X̄1 and X̄2 are the mean values within the tracking volume of the first frame and the tracking result volume of the second frame (the small cubes in Fig. 12(c), the arrow indicating the direction of movement of the same cube over time).
Finally, the target point's velocity vector is obtained from the positions of the tracking volume and the tracking result volume, together with the time interval between the first and second frames of three-dimensional image data. For example, the speed magnitude of the fluid velocity vector can be obtained by dividing the distance between the tracking volume and the tracking result volume (i.e. the target point's displacement within the preset time interval) by the time interval between the first and second frames of volume plane beam echo image data, and the velocity direction can be the direction of the line from the tracking volume to the tracking result volume (the arrow direction in Fig. 12(c)), i.e. the target point's direction of movement within the preset time interval.
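The block-matching step above can be sketched as an exhaustive 3D search minimising the sum of absolute differences. This is a minimal sketch on tiny nested-list volumes; function names, the non-negative shift range, and the tiny data sizes are all illustrative assumptions:

```python
def sad(vol1, vol2, off):
    """Sum of absolute differences between a block of vol1 and a shifted block of vol2."""
    a, b, c = off
    m, n, l = len(vol1), len(vol1[0]), len(vol1[0][0])
    return sum(abs(vol1[i][j][k] - vol2[i + a][j + b][k + c])
               for i in range(m) for j in range(n) for k in range(l))

def track(vol1, vol2, max_shift, dt):
    """Find the shift (A, B, C) minimising SAD and convert it to a velocity.

    vol1: tracking block from frame 1; vol2: search region from frame 2
    (large enough that every candidate shift stays in bounds); dt: the
    frame interval. Returns (best_shift, velocity), velocity = shift / dt.
    """
    candidates = [(a, b, c)
                  for a in range(max_shift + 1)
                  for b in range(max_shift + 1)
                  for c in range(max_shift + 1)]
    best = min(candidates, key=lambda off: sad(vol1, vol2, off))
    return best, tuple(s / dt for s in best)
```

A practical implementation would search signed shifts around the block and use the normalized-correlation criterion where brightness varies between frames.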
To improve the accuracy of the speckle-tracking computation of fluid velocity vectors, wall filtering is applied to each obtained frame of three-dimensional image data: that is, for each spatial position point in the three-dimensional data, a wall filter is applied along the time direction. Tissue signals in the three-dimensional image data change little over time, while fluid signals such as blood-flow signals change greatly because of the flow; a high-pass filter can therefore be used as the wall filter for fluid signals such as blood flow. After wall filtering, the higher-frequency fluid signals are retained while the lower-frequency tissue signals are filtered out; in the filtered signals, the signal-to-noise ratio of the fluid signal is greatly increased, benefiting the accuracy of fluid velocity vector computation. The wall-filtering of acquired three-dimensional image data in this embodiment applies equally to the other embodiments.
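A minimal sketch of the wall-filtering idea above — a first-difference high-pass filter applied along the time direction at one voxel. Practical wall filters use higher-order polynomial-regression or IIR designs; this illustrates only the principle:

```python
def wall_filter(series):
    """First-difference high-pass filter along time for one voxel.

    series: the voxel's sample values over successive frames. A slowly
    varying (tissue) component is suppressed; a rapidly varying (flow)
    component is kept.
    """
    return [series[t + 1] - series[t] for t in range(len(series) - 1)]
```

A constant (pure tissue) series is filtered to zero, while a changing (flow) series passes through.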
As another example, in other embodiments of the invention, the method of obtaining the target point's velocity vector based on the temporal and spatial gradients at the target point includes:
first, obtaining at least two frames of three-dimensional ultrasound image data from the volume ultrasound echo signals; the three-dimensional image data may also be wall-filtered before the following steps are performed;
then, obtaining from the three-dimensional image data the gradient along the time direction at the target point, and obtaining from the three-dimensional image data the first velocity component along the ultrasound propagation direction at the target point;
next, obtaining, from said gradient and said first velocity component, the second velocity component along a first direction and the third velocity component along a second direction at the target point, the first direction, the second direction, and the propagation direction being mutually perpendicular;
finally, synthesizing the target point's fluid velocity vector from the first, second, and third velocity components.
The mutual perpendicularity of the first direction, the second direction, and the ultrasound propagation direction in this embodiment can be understood as building a three-dimensional coordinate system with the propagation direction as one coordinate axis — for example, the propagation direction is the Z axis and the remaining first and second directions are the X and Y axes respectively.
First, assume the wall-filtered three-dimensional image data is denoted P(x(t), y(t), z(t), t). Differentiating P along the time direction and applying the chain rule gives formula (1):
dP/dt = (∂P/∂x)·vx + (∂P/∂y)·vy + (∂P/∂z)·vz + ∂P/∂t,
where vx = dx/dt, vy = dy/dt, and vz = dz/dt; for a scatterer of constant brightness dP/dt = 0, so formula (2) is
(∂P/∂x)·vx + (∂P/∂y)·vy + (∂P/∂z)·vz = −∂P/∂t.
Then, solving by least squares, formula (2) can be rearranged into the linear regression equation of formula (3):
b = A·v + ε, with row i of A being (Px,i, Py,i, Pz,i), the i-th entry of b being −Pt,i, and v = (vx, vy, vz)ᵀ,
where the subscript i denotes the result of the i-th computation of the gradients of the three-dimensional image data along the X, Y, and Z directions respectively. The gradients along the three coordinate axes at each spatial point, computed multiple times, form the parameter matrix A. Suppose there are N computations in total; since these N computations occupy a very short time, the fluid velocity is assumed to remain constant during it. εi denotes the random error. Here, formula (3) satisfies the Gauss-Markov theorem, and its solution is formula (4):
v̂ = (AᵀA)⁻¹ Aᵀ b.
According to the Gauss-Markov theorem, the variance of the random error εi can be expressed as formula (5):
σ̂² = (b − A·v̂)ᵀ(b − A·v̂) / (N − 3).
Next, based on the above gradient relation model, the velocity values vz along the propagation direction (i.e. the Z direction) at different times at each spatial point, and their mean, are obtained by Doppler ultrasound measurement, and the variance of the random error along the propagation direction and the parameter matrix are computed at each spatial point. VD is the set of velocity values at different times measured by the Doppler ultrasound method, and vz in formula (6) is the mean obtained by the Doppler method:
vz = (1/N) Σ(i) VD(i).    (6)
On this basis, the variance of the random error εj of formula (3) is expressed as formula (7):
σ̂D² = Σ(i) (VD(i) − vz)² / (N − 1).
With the two different variances computed from formulas (5) and (7), and using the variance of the random error along the propagation direction and the parameter matrix at each spatial point as known information, the solution of formula (3) is found by weighted least squares, as in formula (8):
v̂ = (Aᵀ W A)⁻¹ Aᵀ W b,
where W is the diagonal weight matrix formed from the reciprocals of the error variances.
Finally, having solved for the three mutually perpendicular velocities vx, vy, and vz, the magnitude and direction of the vector blood-flow velocity are obtained by three-dimensional fitting.
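The ordinary-least-squares step of the derivation above can be sketched as follows. This is a small self-contained solver; the names are illustrative, and the weighted variant of formula (8) would additionally scale each equation by its inverse error variance:

```python
def solve3(M, r):
    """Solve a 3x3 linear system M x = r by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(M, r)]  # augmented matrix
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(M[i][c]))  # partial pivot
        M[c], M[p] = M[p], M[c]
        for i in range(3):
            if i != c:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def ols_velocity(A, b):
    """Ordinary least squares v = (A^T A)^(-1) A^T b for the gradient equations.

    A: N rows of spatial gradients (Px, Py, Pz); b: the N values of -Pt.
    Returns the fitted velocity (vx, vy, vz).
    """
    AtA = [[sum(A[n][i] * A[n][j] for n in range(len(A))) for j in range(3)]
           for i in range(3)]
    Atb = [sum(A[n][i] * b[n] for n in range(len(A))) for i in range(3)]
    return solve3(AtA, Atb)
```

Forming the normal equations explicitly is adequate for a 3-unknown fit; larger or ill-conditioned systems would use a QR or SVD solver instead.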
As yet another example, in other embodiments of the invention the Doppler ultrasound imaging method may be used to obtain the target point's fluid velocity vector, as follows.
In the Doppler ultrasound imaging method, ultrasonic beams are transmitted toward the scan target multiple times in succession along the same propagation direction; the echoes of the multiple transmitted volume ultrasonic beams are received, obtaining multiple volume ultrasound echo signals, each value in each echo signal corresponding to a value at one target point when scanning along one propagation direction. Step S400 then includes:
first, applying a Hilbert transform to each of the multiple echo signals along the propagation direction, or IQ-demodulating the echo signals, and, after beamforming, obtaining multiple groups of three-dimensional image data in which the value at each target point is represented by a complex number. After N transmissions and receptions, there are N time-varying complex values at each target point position. The speed of target point z along the propagation direction is then computed according to the following two formulas — the lag-one autocorrelation
R(1) = Σ(i=1..N−1) s(i+1)·s*(i), where s(i) = x(i) + j·y(i),
and formula (10):
Vz = [ c / (4π·f0·Tprf) ] · arctan( Im{R(1)} / Re{R(1)} ),
with Im{R(1)} = Σ(i=1..N−1) [ y(i+1)·x(i) − x(i+1)·y(i) ] and Re{R(1)} = Σ(i=1..N−1) [ x(i+1)·x(i) + y(i+1)·y(i) ],
where Vz is the computed velocity value along the propagation direction, c is the speed of sound, f0 is the probe center frequency, Tprf is the time interval between two transmissions, N is the number of transmissions, x(i) is the real part of the i-th transmission, y(i) is the imaginary part of the i-th transmission, Im{·} is the imaginary-part operator, and Re{·} is the real-part operator. The above is the flow-velocity formula at one fixed position.
Next, by analogy, the magnitude of the fluid velocity vector at every target point can be found from its N complex values.
Finally, the direction of the fluid velocity vector is the ultrasound propagation direction, i.e. the propagation direction corresponding to the multiple volume ultrasound echo signals above.
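The autocorrelation estimator of formula (10) can be sketched directly. This is a minimal sketch at one sample position; the parameter values in the usage are illustrative:

```python
import math

def kasai_velocity(iq, c, f0, t_prf):
    """Lag-one autocorrelation velocity estimate at one sample position.

    iq: list of N complex samples s(i) = x(i) + j*y(i), one per transmission.
    Returns Vz = c / (4*pi*f0*Tprf) * atan2(Im R(1), Re R(1)).
    """
    re = sum(iq[i + 1].real * iq[i].real + iq[i + 1].imag * iq[i].imag
             for i in range(len(iq) - 1))
    im = sum(iq[i + 1].imag * iq[i].real - iq[i + 1].real * iq[i].imag
             for i in range(len(iq) - 1))
    return c / (4 * math.pi * f0 * t_prf) * math.atan2(im, re)
```

A scatterer moving at axial speed v shifts the phase by 4π·f0·Tprf·v/c per pulse, so feeding synthetic samples with that phase ramp recovers v (up to the aliasing limit where the per-pulse phase shift reaches π).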
Generally, in ultrasound imaging, Doppler processing of the volume ultrasound echo signals using the Doppler principle yields the movement speed of the scan target, or of moving parts within it. For example, after the volume ultrasound echo signals are obtained, the movement speed of the scan target or of moving parts within it can be obtained from them by autocorrelation estimation or cross-correlation estimation. The method of Doppler-processing the echo signals to obtain such movement speeds may be any method, currently used or possibly used in the future in the art, capable of computing the movement speed of the scan target or of moving parts within it from the volume ultrasound echo signals; it is not detailed further here.
Of course, for the volume ultrasound echo signals corresponding to one propagation direction, the invention is not limited to the two methods above; other methods known in the art, or possibly adopted in the future, may also be used.
In addition, when computing the blood-flow velocity vectors of target points in the three-dimensional image data, the new position reached by a computation point is very likely not the position at which the target point's vector is to be computed; interpolation, for example 8-point interpolation, can then be used. As shown in Fig. 12(d), suppose the gray point in the middle of the volume is the point to be computed, and the 8 black points are the positions at which velocities are computed in each frame. First, the distance from each black point (the black points representing the vertices of the volume) to the gray point is obtained by spatial connection, and a weight list is derived from these distances. The velocity at each black point resolves into Vx, Vy, and Vz, the three directions mutually perpendicular. From the velocities of the 8 black points along the three directions, the three directional velocity values at the gray point are computed according to the weight values, giving the gray point's speed magnitude and direction. The 8-point interpolation above assumes the volume is a cube; interpolation may of course also be based on volumes of other shapes, e.g. regular tetrahedra or regular octahedra. By delimiting the volumetric structure of the target point's neighborhood space, the corresponding interpolation scheme is set, so that the fluid velocity vector at the target point's position to be computed is computed from the fluid velocity vectors at the new positions reached by the computation points.
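The distance-weight scheme described above can be sketched as inverse-distance weighting over the 8 cube corners. This is an illustrative sketch (trilinear interpolation is the common alternative for a cubic cell), and all names are hypothetical:

```python
def idw_velocity(corners, point):
    """Inverse-distance-weighted velocity at an interior point of a cube.

    corners: 8 (position, velocity) pairs, each element a 3-tuple of floats.
    point: the (x, y, z) at which the velocity is wanted.
    """
    dists = [sum((p - q) ** 2 for p, q in zip(pos, point)) ** 0.5
             for pos, _ in corners]
    if 0.0 in dists:                      # point coincides with a corner
        return corners[dists.index(0.0)][1]
    weights = [1.0 / d for d in dists]    # nearer corners weigh more
    total = sum(weights)
    return tuple(sum(w * vel[axis] for w, (_, vel) in zip(weights, corners)) / total
                 for axis in range(3))
```

At the cube center all weights are equal, so a uniform corner velocity is returned unchanged, which is a quick sanity check on the weighting.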
Second way: when volume ultrasonic beams are transmitted along multiple propagation directions in step S100 to form multiple scan bodies as above, the echoes of the volume beams from the multiple scan bodies are received, multiple groups of volume ultrasound echo signals obtained, and the fluid velocity vector information of target points within the scan target computed from these multiple groups. In this process, one velocity component vector of a target point, at the corresponding position in the three-dimensional image data, is first computed based on one of the multiple groups of echo signals, and multiple velocity component vectors at that corresponding position are obtained from the multiple groups; then, the fluid velocity vector of the target point at the corresponding position in the three-dimensional image data is synthesized from the multiple component vectors.
As noted above, volume plane ultrasound echo signals may be used in this embodiment to compute the target points' fluid velocity vectors; in some embodiments of the invention, one velocity component vector of a target point at one position is computed based on one of multiple groups of volume plane ultrasound echo signals, and multiple component vectors at that position are obtained from the multiple groups.
In this embodiment, the process of computing one velocity component vector of a target point based on one of the multiple groups of echo signals may follow any of the several computation methods provided in the first way above. For example, from one group of echo signals, the target point's displacement and direction of movement within a preset time interval are computed to obtain the component vector at the corresponding position. The method of computing the component vector may use the speckle-tracking-like methods described above; or Doppler ultrasound imaging may be used to obtain the component vector along one propagation direction; or the blood-flow velocity component vector may be obtained based on the temporal and spatial gradients at the target point; and so on. See the detailed explanation of the first way above; it is not repeated here.
When two angles exist in step S100, 2N transmissions give the magnitude and direction of the fluid velocity at all positions to be measured at one instant; with three angles, 3N transmissions are needed, and so on. Fig. 12(a) shows two transmissions at different angles, A1 and B1; after 2N transmissions, the velocity magnitude and direction at the origin position in the figure can be computed by velocity fitting, shown in Fig. 12(b). In Fig. 12(b), VA and VB are the component vectors of the target point at the corresponding position along the two propagation directions A1 and B1 of Fig. 12(a) respectively; spatial velocity synthesis yields the fluid velocity vector V of the target point at that position. With two propagation directions, the image data obtained from every transmission can be reused to compute component vectors by the Doppler imaging method, reducing the time interval between two whole-field estimates of fluid speed and direction: the minimum interval is the time of 2 transmissions for two propagation directions, of 3 transmissions for three, and so on. With the methods above, the flow speed magnitude and direction at all positions of the whole field can be obtained at every instant.
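The velocity fitting of Fig. 12(b) amounts to solving a small linear system relating the measured directional components to the underlying vector. A minimal in-plane (two-direction) sketch, with illustrative names:

```python
def synthesize_2d(dir_a, dir_b, va, vb):
    """Recover an in-plane velocity vector from two directional components.

    dir_a, dir_b: unit vectors of the two beam propagation directions
    (the A1 and B1 of Fig. 12(a)); va, vb: the components measured along
    them. Solves  dir_a . v = va,  dir_b . v = vb  by Cramer's rule.
    """
    (ax, az), (bx, bz) = dir_a, dir_b
    det = ax * bz - az * bx
    if det == 0:
        raise ValueError("propagation directions must not be parallel")
    vx = (va * bz - az * vb) / det
    vz = (ax * vb - va * bx) / det
    return vx, vz
```

With three or more non-coplanar directions the same idea extends to a 3x3 (or overdetermined least-squares) solve for (vx, vy, vz).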
When at least three propagation directions exist in step S100, the at least three groups of beam echo signals used to compute at least three velocity component vectors correspond to at least three propagation directions that do not lie in the same plane; this makes the computed fluid velocity vector closer to the true velocity vector in real three-dimensional space. This is referred to below, in short, as the constraint on propagation directions.
For example, in step S100, volume ultrasonic beams may be transmitted toward the scan target along N (3 ≤ N) propagation directions, while in step S400 each computation of the target point's fluid velocity vector at the corresponding position uses n velocity component vectors, with 3 ≤ n < N. That is, step S100 may be: transmitting volume ultrasonic beams toward the scan target along at least three propagation directions, of which adjacent sets of at least three propagation directions are not coplanar. Then, in step S400, following the process of computing one velocity component vector of a target point from one of at least three groups of echo signals, the at least three blood-flow velocity component vectors corresponding to the target point at its position in the successively received at-least-three groups of echo signals are computed respectively, and the target point's fluid velocity vector at that position is synthesized from the component vectors along these at least three propagation directions.
As another example, to cut the computation load and reduce the complexity of scanning and computation, step S100 may also transmit volume ultrasonic beams along N (3 ≤ N) propagation directions while step S400 uses N velocity component vectors for each computation of the target point's fluid velocity vector at its position. That is, step S100 may be: transmitting volume ultrasonic beams toward the scan target along at least three propagation directions which are not coplanar. Then, in step S400, following the process of computing one velocity component vector of a target point at its position from one of the received at-least-three groups of echo signals, the component vectors along all propagation directions corresponding to the target point at its position in the at least three groups are computed respectively, and the target point's fluid velocity vector at that position is synthesized from the component vectors along all propagation directions.
To satisfy the above constraint on propagation directions — whether implemented as "adjacent sets of at least three propagation directions are not coplanar" or as "the at least three propagation directions are not coplanar" — different propagation directions can be obtained by adjusting the delay times of the transmitting elements participating in beam transmission, and/or by driving those transmitting elements to deflect so that the ultrasound exit direction changes. Driving the transmitting elements to deflect means, for example, configuring a corresponding drive control for every linear array probe, or every transmitting element, in a probe assembly arranged in array form, and uniformly adjusting the deflection angles or delays of the probes or transmitting elements in the assembly, so that the scan bodies formed by the volume ultrasonic beams output by the assembly have different offsets, thereby obtaining different propagation directions.
In some embodiments of the invention, user-selectable items on the display interface, or option configuration keys, can be provided to acquire the number of propagation directions selected by the user, or the number of velocity component vectors used in step S400 to synthesize the fluid velocity vector, generating instruction information. According to this instruction information, the number of propagation directions in step S100 is adjusted, and the number of component vectors used for synthesis in step S400 is determined from that number; or the number of component vectors used in step S400 to synthesize the target point's fluid velocity vector at its position is adjusted. This provides the user a more comfortable experience and a more flexible information-extraction interface.
In step S500, the 3D image processing module 11 marks the target points' fluid velocity vector information in the three-dimensional ultrasound image data, forming fluid velocity vector markers and obtaining volume image data 900 containing the markers. The three-dimensional image data herein may be acquired in real time or not; if not, playback, pause, and similar processing of the three-dimensional image data can be realized. Furthermore, when enhanced three-dimensional ultrasound image data of at least part of the scan target is obtained by gray-scale flow imaging in step S310, the gray-scale features or fluid velocity information correspondingly obtained by gray-scale flow imaging can likewise be presented in the image shown on the display device. In executing step S500, the 3D image processing module 11 may segment the region of interest representing the fluid region in the enhanced three-dimensional image data, obtain cloud-like cluster region blocks, and mark these cloud-like cluster region blocks in the three-dimensional image data, obtaining volume image data containing cloud-like cluster bodies. Alternatively, in the dynamic cluster-body display method shown in Fig. 8(b), step S510 is applied after step S310: segmenting the region of interest representing the fluid region in the enhanced three-dimensional image data, obtaining cloud-like cluster region blocks, and marking the cloud-like cluster region blocks in the three-dimensional image data to form cluster bodies, obtaining volume image data containing cluster bodies; for the specific implementation of step S510, see the related description of step S500.
In some embodiments of the invention, before marking the target points' fluid velocity vector information and/or the cloud-like cluster region blocks, the three-dimensional image data may first be converted into volume image data with a see-through (perspective) effect, facilitating the subsequent conversion to parallax images.
Converting the three-dimensional image data into see-through volume image data can be done in the following two ways.
First: setting different transparencies for different layers of the three-dimensional image data. With suitable transparency settings, looking through from some angle (the viewing angle of the subsequent parallax image conversion) reveals the information inside the scan target (such as the vessel 930 of Figs. 13 and 14) — chiefly to show the fluid velocity vector information of target points in vessel 930, for example the fluid velocity vector markers 920 formed by marking the target points' information in Figs. 13 and 14.
As in Fig. 13(a), parallel sections (710, 711, 712) are taken through the three-dimensional image data and each section given a different transparency, or the multiple sections given stepwise graded transparencies in sequence; Fig. 13(a) represents the different transparencies by different hatchings. The transparencies of the parallel sections (710, 711, 712) may differ, or may vary in steps in sequence. Alternatively, the stepwise grading of multiple sections may mean: the section at the target position (i.e. the core observation position) is given low transparency; then, following the positional order of the sections and taking that section's transparency as reference, the sections on both sides of it are given stepwise increasing transparencies, or simply relatively high transparencies. The transparency settings thus de-emphasize the background image and make the section information at the target position (the core observation position) stand out more prominently. The section at the core observation position in this embodiment may of course be one section, or several adjacent sections. For example, in Fig. 13(a), if parallel section 711 has 20% transparency, then parallel sections 710 and 712 may each have 50%.
Specifically, the layered transparencies of the three-dimensional image data may be set according to the viewing angles of the two parallax image data streams. A viewing angle here may be a viewpoint position corresponding to any parallax number in the step of converting the volume image data into two parallax image data streams, or it may be one of the two viewing angles from which the playing volume image data is photographed.
Also, as shown in Fig. 13(b), concentric spherical sections (721, 722) centered on an observation point may be taken through the three-dimensional image data, each section given a different transparency, or the multiple sections given stepwise graded transparencies in sequence. The observation point in this embodiment may be chosen by the user; for example, the spatial center point of the three-dimensional image data may serve as the observation point.
The step size of the multiple parallel sections or concentric spherical sections of Figs. 13(a) and 13(b) may of course be set as needed, so long as the internal information of the scan target can be presented layer by layer. Since the transparency settings for the see-through conversion must usually also account for the viewing angles of the parallax image conversion, the layered transparencies of the three-dimensional image data may be set per layer from the standpoint of the viewing angle, so as to present the internal information of the scan target.
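The stepwise transparency grading described above can be sketched as a small assignment function. Parameter names and the default values are illustrative, not taken from the patent:

```python
def stepped_transparency(n_slices, core, core_alpha=0.2, step=0.15, max_alpha=0.9):
    """Assign stepwise graded transparencies to parallel slices.

    core: index of the slice at the core observation position, which gets
    the lowest transparency (core_alpha); transparency grows by `step` per
    slice away from it on either side, capped at max_alpha.
    """
    return [min(core_alpha + step * abs(i - core), max_alpha)
            for i in range(n_slices)]
```

With five slices and the core at index 2, the core slice stays most opaque while the flanking slices fade symmetrically, matching the 20%/50% example above in spirit.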
As in Fig. 14, tissue-structure segmentation may be performed on the three-dimensional image data and the segmented tissue-structure regions given different transparencies. 930 is a segment of vessel image comprising a first vessel-wall tissue layer 931 and a second vessel-wall tissue layer 932, the two wall layers distinguished by different transparencies; Fig. 14 shows the different tissue-structure regions by different hatchings.
The two schemes given in Figs. 13 and 14 may also be used in combination with each other. Figs. 13 and 14 show see-through volume image data 900.
Second: converting each frame of three-dimensional image data into a three-dimensional perspective rendering using three-dimensional drawing software. The drawing software here may include 3ds Max, other software tools capable of presenting stereoscopic renderings, or self-made comparable 3D drawing tools. The perspective rendering manner of the three-dimensional rendering in this embodiment may also follow the foregoing — for example, setting see-through effects separately for the different tissue structures according to the tissue-structure segmentation results.
After converting each frame of three-dimensional image data into see-through volume image data, the target points' fluid velocity vector information can be marked frame by frame according to the first or second mode above. For example, based on the second mode, the three-dimensional image data is converted into see-through volume image data and the target points' time-varying fluid velocity vector information is marked in the volume image data, forming the time-varying fluid velocity vector markers; and/or, the time-varying cloud-like cluster region blocks are marked in the volume image data.
Based on the embodiments above, the volume image data may specifically be generated as follows.
For example: set layered transparencies for each frame of three-dimensional image data, and mark in each frame the target points' fluid velocity vector information at the corresponding positions, obtaining a single volume image frame containing the fluid velocity vector markers; the temporally consecutive multiple volume frames constitute the volume image data, so that when the volume image data is displayed, the markers present a flowing, time-varying visual effect — that is, flowing velocity vector markers varying over time can be observed by the eye in the 3D ultrasound image. And/or: set layered transparencies for each frame, and mark the cloud-like cluster region blocks in each frame to form cluster bodies, obtaining single volume frames containing cloud-like cluster bodies; the temporally consecutive frames constitute the volume image data, so that displaying it gives the cluster bodies a rolling, time-varying visual effect — that is, cluster bodies rolling over time can be observed by the eye in the 3D ultrasound image.
As another example: convert each frame of three-dimensional image data into a perspective rendering with three-dimensional drawing software, and mark in each rendering the fluid velocity vector information at the target points' corresponding positions, obtaining single volume frames containing the markers; the consecutive frames constitute the volume image data and yield the flowing marker effect on display. And/or: convert each frame into a perspective rendering and mark the cloud-like cluster region blocks in each rendering, obtaining single volume frames containing cloud-like cluster bodies; the consecutive frames constitute the volume image data and yield the rolling cluster effect on display.
Or again: display the three-dimensional image data as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic display technology, and mark the target points' time-varying fluid velocity vector information in the spatial stereoscopic image, obtaining the volume image data, so that displaying it gives the markers the flowing, time-varying effect. And/or: display the three-dimensional image data as a dynamic spatial stereoscopic image based on true three-dimensional display technology, and mark the time-varying cloud-like cluster region blocks in the spatial stereoscopic image, obtaining the volume image data, so that displaying it gives the cluster bodies the rolling, time-varying effect.
True three-dimensional stereoscopic display technology means a technique that, based on holographic display technology or volumetric three-dimensional display technology, displays the three-dimensional ultrasound image data within a certain physical space, forming a real spatial stereoscopic image of the scan target.
The holographic display technology herein mainly includes traditional holograms (transmission, reflection, image-plane, rainbow, composite holograms, etc.) and computer-generated holograms (CGH, Computer Generated Hologram). A computer-generated hologram floats in the air and has a wide color gamut. In a CGH, the object used to produce the hologram must be described by a mathematical model generated in a computer, and the physical interference of light waves is replaced by computation steps: in each step, the intensity pattern of the CGH model can be determined, and that pattern can be output to a reconfigurable device which re-modulates the light-wave information and reconstructs the output. Put plainly, CGH obtains the interference pattern of a computer graphic (a virtual object) by computer calculation, replacing the interference process of recording the object's light wave in traditional holography; the diffraction process of hologram reconstruction is unchanged in principle, only the device reconfigurable for light-wave information being added, thereby achieving holographic display of different static and dynamic computer graphics.
Based on holographic display technology, in some embodiments of the invention, as shown in Fig. 15, the spatial stereoscopic display device 8 includes a 360-degree holographic phantom imaging system, which comprises a light source 820, a controller 830, and a beam splitter 810. The light source 820 may be a spotlight; the controller 830 includes one or more processors that receive, via a communication interface, the three-dimensional ultrasound image data output by the data processing module 9 (or the image processing module 7 within it), process it to obtain the interference pattern of the computer graphic (virtual object), and output the interference pattern to the beam splitter 810. The light projected by the source 820 onto the splitter 810 presents this interference pattern, forming the spatial stereoscopic image of the scan target. The beam splitter 810 here may be a special lens, a four-sided pyramid, or the like.
Besides the 360-degree holographic phantom imaging system above, the spatial stereoscopic display device 8 may also be based on holographic projection equipment — for example, forming stereoscopic images on air, special lenses, fog screens, etc. The device 8 may thus also be one of: air holographic projection equipment, laser-beam holographic projection equipment, holographic projection equipment with a 360-degree holographic display screen (whose principle is to project the image onto a high-speed rotating mirror, thereby realizing the holographic image), and fog-screen stereoscopic imaging systems.
Air holographic projection equipment forms a spatial stereoscopic image by projecting the interference pattern of the computer graphic (virtual object) obtained in the embodiments above onto a wall of airflow; because the vibration of the water molecules making up water vapor is uneven, a hologram with a strong stereoscopic feel can be formed. This embodiment therefore adds, to the embodiment of Fig. 15, equipment for forming the airflow wall.
Laser-beam holographic projection equipment is a holographic projection system that uses laser beams to project solid images: the interference pattern of the computer graphic (virtual object) obtained above is projected by laser beams to obtain the spatial stereoscopic image. This embodiment chiefly exploits the fact that when oxygen and nitrogen disperse in the air, their mixture becomes incandescent matter, forming the holographic image through continual small bursts in the air.
The fog-screen stereoscopic imaging system adds, to the embodiment of Fig. 15, an atomizing device for forming a water-mist wall; using the mist wall as the projection screen, the interference pattern of the computer graphic (virtual object) obtained above forms a holographic image on the mist wall by laser light, yielding the spatial stereoscopic image. Fog-screen imaging forms images in the air by laser light with the help of airborne particles: the atomizing device produces an artificial spray wall that replaces the traditional projection screen, aerodynamics is used to create a screen capable of producing a flat sheet of mist, and the projector then projects onto the spray wall to form the holographic image.
The above merely introduces the equipment of several holographic display techniques; the structures of related equipment already on the market may be consulted for specifics. The invention is of course not limited to the above equipment or systems based on holographic display; holographic display equipment or techniques that may exist in the future may also be used.
Volumetric three-dimensional display technology, by contrast, exploits particular mechanisms of human vision to create a display object composed of voxel particles instead of molecular particles: besides seeing the shape embodied by light waves, the real existence of the voxels can be touched. It excites, in an appropriate manner, matter located within a transparent display volume, forming voxels through the generation, absorption, or scattering of visible radiation; when matter at many positions within the volume has been excited, a three-dimensional spatial image composed of many scattered voxels in three-dimensional space can be formed. Two kinds currently exist, as follows.
(1) Swept-volume scanning technology, mainly used for displaying dynamic objects. In this technique, a series of two-dimensional images is projected onto a rotating or moving screen while the screen moves at a speed the observer cannot perceive; through the persistence of human vision, a three-dimensional object is formed in the eye. Display systems using this stereoscopic display technique can therefore realize true three-dimensional display of images (viewable through 360°). In such a system, light beams of different colors are projected by a light deflector onto the display medium, giving the medium rich color; at the same time, the display medium makes the beams produce discrete visible light points — the voxels — each corresponding to a point of the three-dimensional image. Groups of voxels are used to build the image, and the observer can view this true three-dimensional image from any viewpoint. In display devices based on swept-volume technology, the imaging space can be produced by rotation or translation of the screen, and voxels are activated on the emitting surface as the screen sweeps through the imaging space. The system includes subsystems such as a laser system, a computer control system, and a rotating display system.
Based on volumetric display technology, in some embodiments of the invention, as shown in Fig. 16, the spatial stereoscopic display device 8 includes: a voxel solid part 811, a rotation motor 812, a processor 813, an optical scanner 819, and a laser 814. The voxel solid part 811 may be a rotating structure capable of housing a rotating surface, which may be a helical surface, and has a medium that can be displayed by laser projection. The processor 813 controls the rotation motor 812 to drive a rotating surface inside the voxel solid part 811 at high speed; the processor 813 then controls the laser to produce three beams, R/G/B, which are converged into one chromatic ray and projected by the optical scanner 819 onto the rotating surface inside part 811, producing multiple colored bright points. When the rotation speed is fast enough, multiple voxels are generated within part 811, and the aggregation of multiple voxels can form a suspended spatial stereoscopic image.
In other embodiments of the invention, in the structural framework of Fig. 16, the rotating surface may be an upright projection screen located within the voxel solid part 811; this screen's rotation frequency can be as high as 730 rpm, and it is made of very thin translucent plastic. When a 3D object image is to be displayed, the processor 813 first splits, by software, the generated three-dimensional ultrasound image data into multiple section images (rotating about the Z axis, capturing one longitudinal section perpendicular to the X-Y plane on average for every X degrees (e.g. 2 degrees) or less of rotation); for every X degrees or less of screen rotation, a new section image is projected onto the upright screen. As the upright screen rotates at high speed and the multiple section images are projected onto it in rapid turn, a natural 3D image viewable from all directions is formed.
As shown in Fig. 17, the spatial stereoscopic display device 8 includes: a voxel solid part 811 with an upright projection screen 816, a rotation motor 812, a processor 813, a laser 814, and a light-emitting array 817 provided with multiple beam outlets 815. The light-emitting array 817 may use three DLP optical chips based on micro-electro-mechanical systems (MEMS), each chip carrying a high-speed light-emitting array composed of more than a million Digital Micro-Mirror devices; the three DLP chips handle the R/G/B color images respectively, which are composited into one image. The processor 813 controls the rotation motor 812 to drive the upright projection screen 816 at high speed; the processor 813 then controls the laser to produce the three R/G/B beams and feeds them to the light-emitting array 817, which projects the composite beam onto the high-speed rotating screen 816 (relay optical lenses may also be used to reflect the beam onto screen 816), producing multiple display voxels whose aggregation forms a spatial stereoscopic image suspended within the voxel solid part 811.
(2) Static-volume imaging technology forms three-dimensional stereoscopic images based on frequency upconversion. In so-called frequency-upconversion three-dimensional display, the medium of the imaging space absorbs multiple photons and then spontaneously radiates a fluorescence, producing a visible pixel point. The basic principle is that two mutually perpendicular infrared laser beams intersect on an upconversion material; after two resonant absorptions by the material, the electrons of the luminescent centers are excited to a high energy level, and the downward transition may produce visible light emission, so that a point in the upconversion material becomes a glowing bright point. If the intersection point of the two beams performs an addressed three-dimensional scan through the upconversion material along some trajectory, the region swept by the intersection becomes a band that can emit visible fluorescence — displaying a three-dimensional figure identical to the trajectory of the laser intersection point. With this display method, the naked eye can see a three-dimensional image viewable through a full 360°. For static-volume imaging, a display medium is arranged within the voxel solid part 811 of the embodiments above, composed of multiple spaced, stacked liquid-crystal screens (for example, each screen with a resolution of 1024×748 and a screen-to-screen spacing of about 5 mm). The liquid-crystal pixels of these special screens have particular electrically controlled optical properties: when a voltage is applied, the pixel becomes parallel to the beam propagation, like the slat of a blind, letting the illuminating beam pass through transparently; at zero voltage, the pixel becomes opaque, diffusely reflecting the illuminating beam and forming a voxel existing within the stack of liquid-crystal screens. The rotation motors of Figs. 16 and 17 can then be omitted. Specifically, 3D Depth Anti-Aliasing display techniques can further extend the depth impression the spaced liquid-crystal screens can express, letting a physical volume resolution of 1024×748×20 achieve a display resolution as high as 1024×748×608; as in the embodiment of Fig. 17, this embodiment may also adopt DLP imaging technology.
Likewise, the above merely introduces the equipment of several volumetric display techniques; the structures of related equipment already on the market may be consulted for specifics. The invention is of course not limited to the above equipment or systems based on volumetric display; volumetric display technologies that may exist in the future may also be used.
In embodiments of the invention, the spatial stereoscopic image of the scan target may be displayed within a certain space or an arbitrary space, or presented on display media such as air, lenses, fog screens, or rotating or static voxels; the target points' time-varying fluid velocity vector information is then marked in the spatial stereoscopic image to obtain the volume image data. For the manner of marking on such a truly displayed spatial stereoscopic image, the target point's position in the spatial stereoscopic image can be obtained by conversion from its position in the volume image data, based on the image mapping between the volume image data and the imaging range of the spatial stereoscopic image, thereby realizing the marking of the target points' time-varying fluid velocity vector information in the spatial stereoscopic image.
With the volume image data generated by the various means above, the target points' blood-flow velocity vector information can be marked in the volume image data in the following ways.
In some embodiments of the invention, if the target points' fluid velocity vector information obtained in the first mode above is marked on the volume image data 900, then, as shown in Fig. 18, 910 represents a schematic of part of a vessel; arrowed cubes in the figure mark the target points' fluid velocity vector information, the arrow direction indicating the current direction of the target point's fluid velocity vector and the arrow length usable to indicate its current magnitude. In Fig. 18, the solid arrow 922 indicates the target point's fluid velocity vector information at the current instant, and the dashed arrow 921 that of the previous instant. Fig. 18 shows the image effect of the core tissue structure of the volume image data: objects near the observation point appear large, and objects far from it appear small.
Further, in other embodiments of the invention, the target points' fluid velocity vector information obtained in the second mode above is marked on the volume image data — that is, the information comprises the fluid velocity vectors obtained successively as the target point moves continuously to the corresponding positions in the three-dimensional image data. In step S500, the fluid velocity vectors obtained as the target point moves continuously to the corresponding positions are marked, forming fluid velocity vector markers that appear to flow as time passes.
As shown in Fig. 19 — where, to present the stereoscopic display effect, objects near the observation point are large and those far from it small — arrowed spheres 940 mark the target points' fluid velocity vector information, the arrow direction indicating the current direction of the target point's fluid velocity vector and the arrow length usable to indicate its magnitude; 930 is a segment of vessel image. In Fig. 19, the solid arrowed sphere 941 indicates the target point's fluid velocity vector information at the current instant, and the dashed arrowed sphere 942 that of the previous instant. When the target points' fluid velocity vector information is obtained in the second mode above, markers 940 that flow over time are superimposed on the volume image data.
As shown in Fig. 19, 930 is a segment of vessel image comprising a first vessel-wall tissue layer 931 and a second vessel-wall tissue layer 932, the two wall layers distinguished by different colors. Further, as shown in Fig. 20, in the two vessels 960 and 970 the target points' blood-flow velocity vectors are marked with arrowed spheres 973 and 962 respectively, while the stereoscopic image regions of other tissue structures, 971, 972, and 961, are all marked in other colors for distinction; in Fig. 20, the different hatching types filling the regions represent different color markings within them. Therefore, to realize the stereoscopic imaging effect and distinguish the displayed information, the volume image data includes stereoscopic image regions presenting the various tissue structures according to anatomical structure and hierarchy, and the color parameters of each stereoscopic image region are configured so as to distinguish it in display from adjacent regions.
Also, to highlight the marked fluid velocity vector information in the volume image data, the stereoscopic image regions of the various tissue structures may be displayed as contour lines, avoiding covering or confusing the fluid velocity vector markers. For example, as shown in Fig. 18, for a vessel segment 910, its external contour and/or certain section contours may be displayed to indicate the image region in which the fluid velocity vector markers 920 lie, thereby displaying the markers 920 more prominently and presenting them more intuitively and clearly.
As shown in Fig. 21, when the embodiments above obtain enhanced three-dimensional ultrasound image data of at least part of the scan target by gray-scale flow imaging in step S300, the gray-scale features or velocity information correspondingly obtained by gray-scale flow imaging can also be presented in the 3D ultrasound image at output display. Whether the enhanced data is processed as a whole three-dimensional data volume or regarded as multiple two-dimensional images processed separately, the corresponding cluster region blocks can be obtained in each frame of enhanced data in the following way. In executing step S500: first, the region of interest representing the fluid region is segmented in one or more frames of enhanced three-dimensional image data, obtaining cloud-like cluster region blocks; the cloud-like cluster region blocks are marked in the three-dimensional image data to form cluster bodies, obtaining volume image data containing cluster bodies, so that cluster bodies rolling over time are presented in the 3D ultrasound image. In Fig. 21(a), 950, 951, and 952, in different line styles, represent the cluster bodies at successive instants; as time passes, the cluster bodies can be seen rolling over, vividly expressing the overall rolling of the fluid and giving the observer an all-round viewing perspective. In this embodiment, the segmentation of the region of interest may be based on image gray-scale attributes. Fig. 21(b) shows the effect of superimposing, on Fig. 21(a), arrowed spheres 940 marking the target points' fluid velocity vector information.
Further, to display the cluster bodies more clearly, color information may be superimposed on the cloud-like cluster region blocks. For example, when the vessel wall uses a red palette, the cluster region blocks representing blood flow are overlaid with color information such as white or orange-red, for distinction. Alternatively, in the step of segmenting the fluid-representing region of interest of the enhanced three-dimensional image data to obtain the cloud-like cluster region blocks, the region of interest is segmented based on image gray scale, obtaining cluster region blocks of different gray-scale features; for a volumetric cluster region block, the gray-scale feature here may be the mean gray value of the spatial points within the whole block, the maximum or minimum gray value within the whole block, etc. — a value or set of attribute values characterizing the gray-scale property of the whole block. In the step of displaying the cloud-like cluster region blocks in the displayed volume image data, blocks of different gray-scale features are rendered with different colors. For example, if the segmented cluster region blocks are classified by gray-scale feature attribute into classes 0-20, each class is marked and displayed with one color, or the classes 0-20 are marked and displayed with colors of different purity within the same hue.
In Fig. 21(c), cluster region blocks 953 and 954 may be marked in different colors to express the gray-scale characteristics reflected by their velocities. Of course, as also shown in Fig. 21(c), for the same cloud-like cluster region blocks 953 and 954, region bodies of different gray levels may be obtained by the gray-scale-based segmentation above, and different colors superimposed for rendering according to the gray-level variation of the different region bodies within the block; in Fig. 21(c), the different hatchings filling the different region bodies of blocks 953 and 954 represent the different superimposed color renderings. The color rendering may also follow the embodiment above: the different region bodies within a cluster block are classified by gray-scale feature attribute into multiple classes, each class marked and displayed with one hue (or tone), or the multiple classes marked and displayed with colors of different purity within the same hue (or tone).
Of course, color information set according to the velocity information of the fluid region represented by the cluster region block may also be superimposed on that block. For example, in Fig. 21(c), cluster region blocks 953 and 954 are marked in different colors to represent the velocity information of their corresponding fluid regions.
Based on the display effect of the cloud-like cluster region blocks above, the invention in fact provides another display mode, as in Figs. 21 and 22: through a user-input mode-switch command, the current display mode can be switched to the display mode obtained by displaying the volume image data containing cluster bodies, so that the cluster bodies present a rolling, time-varying visual effect in the output display.
As shown in Figs. 18 to 22, in executing step S500 of marking the target points' fluid velocity vector information in the three-dimensional ultrasound image data, the fluid velocity vector markers (920, 940, 973, 962, 981, 982) are distinguished in display from the background image portion of the volume image data (i.e. the stereoscopic image regions of other tissue structures in the volume image data, such as vessel-wall regions or lung regions) by configuring one of, or a combination of two or more of, the markers' color, stereoscopic shape, and transparency. For example, if the vessel wall uses green, the fluid velocity vector markers within it use red; or the walls and markers of arteries both use a red palette while the walls and markers of veins both use a green palette.
Similarly, the different speed levels and directions of the fluid velocity vector information can also be distinguished in display by configuring one of, or a combination of two or more of, the color, stereoscopic shape, and transparency of the fluid velocity vector markers (920, 940, 973, 962, 981, 982) used to mark the information in the volume image data. For example, arterial markers use the graded stages of a red gradient palette to indicate the different speed levels, while venous markers use the graded stages of a green gradient palette; deep red or deep green indicates fast speed, light green or light red slow speed. For color matching schemes, relevant color-science knowledge may be consulted; it is not enumerated in detail here.
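A graded speed-to-color mapping of the kind described can be sketched as follows. The palette endpoints and channel values are illustrative assumptions, not the patent's own scheme:

```python
def speed_color(speed, v_max, artery=True):
    """Map a speed magnitude to a graded palette color.

    Artery markers use a red gradient and vein markers a green gradient;
    a higher speed gives a deeper (fuller) channel value. Returns an
    (r, g, b) tuple with channels in 0..255.
    """
    level = max(0.0, min(1.0, speed / v_max))   # clamp to [0, 1]
    value = int(round(80 + 175 * level))        # light (slow) to deep (fast)
    return (value, 0, 0) if artery else (0, value, 0)
```

In practice the mapping would be quantized into the display's configured number of speed levels rather than varying continuously.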
Further, in the embodiments above, the fluid velocity vector marker includes a stereoscopic marker body with an arrow or a direction-indicating part — for example, the arrowed cube of Fig. 18 or the arrowed sphere of Fig. 19; or an arrowed prism or a cone, the tip of the cone indicating the direction of the fluid velocity vector; or the small head of a frustum may serve as the direction-indicating part; or the direction of the long diagonal of a marker body with a rhombic longitudinal section may indicate the vector direction; or the two ends of the major axis of an ellipsoid may serve as direction-indicating parts representing the vector direction; and so on. The invention is not limited by the shape of the fluid velocity vector marker: any stereoscopic marker body with a direction indication may be used herein to mark the target point's fluid velocity vector. Thus, for a more intuitive understanding of the target point's fluid velocity vector information, the vector's direction can be represented by the marker body's arrow or direction-indicating part, and its magnitude by the marker body's volume.
Alternatively, the fluid velocity vector marker may be a stereoscopic marker body without an arrow or direction-indicating part — a sphere, ellipsoid, cube, cuboid, or any other stereoscopic shape. Then, for a more intuitive understanding of the information, the vector's magnitude can be represented by the marker body's rotation speed or volume, and its direction shown by making the marker body move over time — for example, the target points' fluid velocity vectors may be computed in the second mode above, yielding markers that flow as time passes. The marker body's rotation speed or volume is associated with the vector magnitude by levels, facilitating marking on the volume image data or the three-dimensional ultrasound image data. The rotation direction may be the same for all marker bodies or may differ, and the rotation speed is one the human eye can recognize; so that the eye can observe the rotation, asymmetric marker bodies, or marker bodies carrying a mark, may be used.
Or again, the marker body's rotation speed may represent the vector's magnitude while an arrow's pointing represents its direction. The invention is therefore not limited to the above combinations for representing the magnitude or direction of the fluid velocity vector: herein, the magnitude may be represented by the volume or rotation speed of the stereoscopic marker body marking the target point's vector, and/or the direction may be represented by the pointing of the body's arrow or direction-indicating part, or by making the body move over time.
In some embodiments of the invention, the target points' fluid velocity vector information obtained in the second mode above is superimposed on the volume image data — that is, the information comprises the fluid velocity vectors obtained successively as the target point moves continuously to the corresponding positions in the three-dimensional image data. In step S500, an association marker can then successively bridge the multiple corresponding positions (e.g. two or more) to which the same target point moves continuously in the three-dimensional image data, forming the target point's motion trajectory for display at output. In Fig. 22, the association markers used to display the motion trajectory include slender cylinders, segmented slender cylinders, comet-tail markers, and the like. In Fig. 22, to present the stereoscopic display effect, objects near the observation point are large and those far from it small. In Fig. 22, 930 is a segment of vessel image; from the initial position of the fluid velocity vector marker (the arrowed sphere 981 or 982) marking the target point's blood-flow velocity vector information, slender cylinders or segmented slender cylinders 991 successively bridge the multiple corresponding positions to which the same target point moves continuously in the volume image data, forming the motion trajectory, so the observer can grasp the target point's manner of motion as a whole. Fig. 22 also shows another way of displaying the trajectory: from the marker's initial position, certain color information is superimposed over the continuous region of the multiple positions to which the same target point moves continuously in the volume image data, forming a comet-tail marker 992; when the observer watches the target point's trajectory, a long tail then trails behind the fluid velocity vector marker 982, like a comet's tail.
To highlight the above motion trajectory in the volume image data, in some embodiments of the invention the method further includes:
first, acquiring user-input labeling information for the association marker and generating a selection instruction, the labeling information including the marker's shape, or the shape of a connecting line together with its color, etc.; then, configuring the relevant parameters of the motion trajectory's association marker in the displayed image according to the labeling information selected in the instruction.
Color herein includes any color obtained by changing the hue, saturation (purity), contrast, transparency, and similar information; the marker shape may take many forms — a slender cylinder, a segmented slender cylinder, a comet-tail marker, or any marker capable of describing a direction.
Further, based on the display effect of target-point motion trajectories, the invention in fact provides another display mode, as in Fig. 22: through a user-input mode-switch command, the current display mode is switched to the display mode showing the target point's motion trajectory — i.e. the mode obtained by executing the above step of successively bridging, with association markers, the multiple corresponding positions to which the same target point moves continuously in the three-dimensional image data, forming the target point's motion trajectory.
Furthermore, the target points whose motion trajectories are drawn may be single or multiple, and the initial positions may be acquired through user-input instructions — for example, acquiring a user-input distribution-density instruction and selecting the target points at random within the scan target according to it, or acquiring a user-input marking-position instruction and obtaining the target points according to it.
In step S500, if the three-dimensional image data is displayed as a dynamic spatial stereoscopic image based on true three-dimensional stereoscopic display technology, the manner of marking the target points' time-varying fluid velocity vector information in the spatial stereoscopic image — for example how colors and marker shapes are configured — may follow the methods above for marking the target points' fluid velocity vector information in the volume image data; it is not repeated here. Of course, the process of displaying the three-dimensional image data as a dynamic spatial stereoscopic image based on true three-dimensional display technology, marking the target points' time-varying fluid velocity vector information in the spatial stereoscopic image, and obtaining the volume image data may further include the following technical scheme:
marking, in each frame of three-dimensional image data, the fluid velocity vector information at the target points' corresponding positions, obtaining a single volume image frame containing the fluid velocity vector markers; the temporally consecutive multiple volume frames constitute the volume image data displayable by true three-dimensional stereoscopic display technology.
In step S600, the parallax image generation module converts the volume image data into two parallax image data streams.
For example, in some embodiments of the invention, as shown in Fig. 23, temporally adjacent volume images of a first time phase and a second time phase are extracted from the volume image data 900; one parallax image data stream is generated from the first-phase volume image with an arbitrary parallax number N, and the other stream is generated from the second-phase volume image with the same parallax number, thereby obtaining the two parallax image data streams. For example, the first- and second-phase volume images may each be converted into a parallax stream on a 9-parallax basis, each stream containing 9 parallax images; or on a 2-parallax basis, each stream containing 2 parallax images. The arbitrary parallax number may be any natural number greater than or equal to 1; the volume image of each time phase is moved, one viewpoint at a time, to the corresponding viewpoint positions at a predetermined parallax angle. When the corresponding parallax image data is output, the two streams are output for display in the order of time phase and viewpoint-position movement: for example, the multiple parallax images obtained from the first-phase volume image in viewpoint-movement order are output first, then the multiple parallax images obtained from the temporally later second-phase volume image in viewpoint-movement order, and so on, successively outputting the multiple parallax images generated, in viewpoint-movement order, from the consecutive multiple volume frames.
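The viewpoint placement for an N-parallax conversion can be sketched as follows. The assumption that views are centred symmetrically about the 0-degree viewing axis, and the angle values, are illustrative:

```python
def viewpoint_angles(n_views, parallax_angle):
    """Camera angles (degrees) for an n-view parallax conversion.

    Spaces n_views viewpoints at the given parallax angle between
    neighbours, centred on the 0-degree viewing axis; e.g. 9 views at
    1.5 degrees span -6 to +6 degrees.
    """
    offset = (n_views - 1) / 2.0
    return [(i - offset) * parallax_angle for i in range(n_views)]
```

Rendering the volume image of one time phase once per returned angle yields that phase's parallax image set, output in viewpoint order as described above.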
又例如,在本发明的其中一些实施例中,如图24所示,播放上述体图像数据,模拟人的左右眼建立两个观察视角,对播放中的上述体图像数据分别在上述两个观察视角进行拍摄,从而获取上述两路视差图像数据。针对体图像数据中的每一帧都分别通过两个观察视角拍摄转化为两路视差图像数据。在这个针对上述体图像数据的播放和拍摄的过程中可以参考图26所示的效果,在显示器901上显示播放的体图像数据900,然后定位光源的位置、和第1虚拟相机和第2虚拟相机位置,进行两个观察视角的拍摄,从而获取上述两路视差图像数据,用以在显示屏显示装置上输出显示,以使人眼能观察到3D超声图像。显示器901可以是图像处理端的平面显示器,或者是上述显示屏显示装置,当然,图26的过程也可以只在后台主机内部运行,而不显示出来。
上述关于体图像数据转化为两路视差图像数据的方法,可以利用软件程序进行算法编程来实现上述视差图像生成模块的功能,例如通过软件编程,将三维超声图像数据或者上述体图像数据转化为两路视差图像数据。
当然,也可以通过增设硬件并结合软件程序来实现将上述体图像数据转化为两路视差图像数据。例如,如图25所示,3D图像处理模块在三维超声图像数据中标记目标点随时间变化的流体速度矢量信息,获得包含流体速度矢量标识的所述体图像数据;利用空间立体显示装置基于真三维立体图像显示技术将体图像数据显示为动态的空间立体图像,这里的空间立体显示装置包括基于全息显示技术的全息显示设备和基于体三维显示技术的体像素显示设备中之一;这里的显示可以是实时采集显示,也可以是非实时显示,在非实时显示模式下,可以针对一段时间内采集获得的三维超声图像数据进行显示,并实现暂停、回放、快进等等播放功能。然后,视差图像生成模块12包括第一摄像装置841和第二摄像装置842,第一摄像装置841和第二摄像装置842分别拍摄上述动态的空间立体图像,获得上述两路视差图像数据。第一摄像装置841和第二摄像装置842可以是光学相机、红外相机等任意一种摄像设备。
在步骤S700中,利用显示屏显示装置8输出显示上述两路视差图像数据以使人眼观察时获得3D超声图像的显示效果。这里的显示屏显示装置8可以基于眼镜类3D显示技术或裸眼式3D显示技术。例如,基于眼镜类3D显示技术,显示屏显示装置8可以包括用于接收并显示所述两路视差图像数据的显示屏和穿戴式眼镜。眼镜类3D显示技术,主要是利用光学原理的特制眼镜来实现。目前市场应用的眼镜类3D,从技术层面来看主要有快门式和偏光式两种,从观看方式来看主要有被动观看和主动观看两种。其中主动观看式的3D眼镜,是利用眼镜本身的主动运作来显示3D效果,有双显示器式3D眼镜类和液晶式3D眼镜类两种。(1)双显示器式3D眼镜类,双显示器式虽然无法满足多人观看的需求,但仍旧算是主动式3D眼镜类的一种,其原理是运用左右眼镜中配置的两组小型显示器来分别显示左右画面,以形成3D效果。(2)液晶式3D眼镜类,由主动液晶镜片构成,其原理是利用电场来改变液晶透光的状态,以每秒数十次的频率交替遮蔽左右两眼视线,播放时
只要交替显示左右两眼画面,再利用同步信号让液晶式3D眼镜与画面同步工作,当播出左眼画面时让右眼镜片变黑、播出右眼画面时让左眼镜片变黑,最终形成3D效果,但这种交替遮蔽会影响画面的亮度。两路视差图像数据其实就是模拟分别进入人左右眼的图像,至于如何输出显示两路视差图像数据以获得眼镜类3D显示效果,可参见相关现有技术,在此不作累述。
还例如,基于裸眼式3D显示技术,显示屏显示装置8可以包括用于接收并显示所述两路视差图像数据的裸眼3D显示屏。
裸眼式3D显示技术,组合了目前最新面板制造技术和引擎软件技术,一方面,在生产制造方面,采用在液晶面板前方配置双凸透镜的全景图像(Integral Imaging)方式显示,即在同一个屏幕上,以分割区域显示(空间多功裸眼3D技术)和切割时间显示(分时多功裸眼3D技术)来实现3D显示。另一方面,在图像显示方面,通过计算机图像处理技术,将已有的2D图像和3D图像的左右两眼的视差,转换为9视差的3D图像。从当前裸眼类3D显示技术形式来看,有光屏障式(Barrier,光屏障式又称光屏蔽式、视差障壁(Parallax Barrier)、视差屏障(Parallax Barriers)等等)、柱状透镜(Lenticular Lens,柱状透镜技术,又被称为双凸透镜或微柱透镜技术,该技术是通过在液晶面板上加上特殊的精密柱面透镜屏,将经过编码处理的3D图像独立送入人的左右两眼,从而可以裸眼体验3D,同时兼容2D,它相比光屏障式技术最大的优点是其亮度不会受到影响,但观测视角宽度会稍小。)、多层显示(Multi Layer Display,这种MLD技术能够通过一定间隔重叠的两块液晶面板,实现裸眼观看3D文字及3D图像)、深度融合式3D显示(Depth-fused 3D,即将两片液晶面板前后重叠在一起,分别在前后两片液晶面板上以不同亮度显示前景与后景的影像,藉由实体的深浅差异来呈现出景深效果。)和指向光源(Directional Backlight,该方法是通过搭配两组快速反应的LCD面板和驱动器,让3D图像以排序方式进入观看者的左右两眼,由于互换的左右两幅图像存在着视差,进而让人眼感受到3D效果。)等几种。两路视差图像数据其实就是模拟分别进入人左右眼的图像,至于如何
输出显示两路视差图像数据以获得裸眼类3D显示效果,可参见相关现有技术,在此不作累述。如图27(b)中给出了裸眼看显示屏显示装置8上显示的图像时,所获得的3D超声图像中呈现流动状血流速度矢量标示符的视觉效果图,图27(a)中给出了裸眼看显示屏显示装置8上显示的图像时,所获得的3D超声图像中呈现翻滚状团簇体的视觉效果图。
图8(即图8(a)和图8(b))为本发明一些实施例的超声成像方法的流程示意图。应该理解的是,虽然图8的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,图8中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分并行执行或者交替地执行。
以上各个实施例在具体说明中仅针对相应步骤的实现方式进行了阐述,而在逻辑不相矛盾的情况下,上述各个实施例是可以相互组合而形成新的技术方案的,该新的技术方案依然在本具体实施方式的公开范围内。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品承载在一个非易失性计算机可读存储载体(如ROM、磁碟、光盘、服务器云空间)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
基于上述超声成像显示方法,本发明还提供了一种超声成像系统,其包括:
探头1;
发射电路2,用于激励上述探头1向扫描目标发射体超声波束;
接收电路4和波束合成模块5,用于接收上述体超声波束的回波,获得体超声回波信号;
数据处理模块9,用于根据上述体超声回波信号,获取上述扫描目标的至少一部分的三维超声图像数据,并基于上述体超声回波信号,获得上述扫描目标内目标点的流体速度矢量信息;
3D图像处理模块11,用于在上述三维超声图像数据中标记目标点的流体速度矢量信息形成流体速度矢量标识,获得包含流体速度矢量标识的体图像数据;
视差图像生成模块12,用于将上述体图像数据转化为两路视差图像数据;及显示屏显示装置8,用于接收上述两路视差图像数据并显示。
上述发射电路2用于执行上述步骤S100,接收电路4和波束合成模块5用于执行上述步骤S200,上述数据处理模块9包括信号处理模块6和/或图像处理模块7,信号处理模块6用于执行上述有关速度分矢量和流体速度矢量信息的计算过程,即前述步骤S400,而图像处理模块7用于执行上述有关图像处理的过程,即前述步骤S300根据上述体超声回波信号,获取上述扫描目标的至少一部分的三维超声图像数据。3D图像处理模块11用于执行上述步骤S500,视差图像生成模块12用于执行步骤S600。显示屏显示装置8进行3D超声成像显示,执行上述步骤S700。上述各个功能模块的执行步骤参见前述有关超声成像显示方法的相关步骤说明,在此不累述。
在本发明的一些实施例中,3D图像处理模块11还用于标记目标点连续移动到三维超声图像数据中相应位置处时依次对应获得的流体速度矢量,以使所述流体速度矢量标识在输出显示时呈现随时间变化的流动状视觉效果。
在本发明的一些实施例中,显示屏显示装置8包括:用于接收并显示所述两路视差图像数据的显示屏和穿戴式眼镜,或者用于接收并显示所述两路视差图像数据的裸眼3D显示屏。具体说明可参见前文中的相关说明。
在本发明的一些实施例中,采用体平面超声波束的回波信号来计算有关
流体速度分矢量和流体速度矢量信息、以及三维超声图像数据。例如,发射电路用于激励探头向扫描目标发射体平面超声波束;接收电路和波束合成模块用于接收体平面超声波束的回波,获得体平面超声回波信号;数据处理模块还用于根据体平面超声回波信号,获取扫描目标的至少一部分的三维超声图像数据和目标点的流体速度矢量信息。
还比如,采用体平面超声波束的回波信号来计算有关速度分矢量和流体速度矢量信息,而利用体聚焦超声波束的回波信号来获得高质量的超声图像,于是,上述发射电路激励所述探头向扫描目标发射体聚焦超声波束;上述接收电路和波束合成模块用于接收上述体聚焦超声波束的回波,获得体聚焦超声回波信号;上述数据处理模块用于根据体聚焦超声回波信号,获取所述扫描目标的至少一部分的三维超声图像数据。此外,上述发射电路激励所述探头向扫描目标发射体平面超声波束,在向扫描目标发射平面超声波束的过程中插入所述向扫描目标发射体聚焦超声波束的过程;上述接收电路和波束合成模块用于接收上述体平面超声波束的回波,获得体平面超声回波信号;上述数据处理模块用于根据体平面超声回波信号,获得所述扫描目标内的目标点的流体速度矢量信息。至于这两种波束类型的交替执行发射的方式参见前述相关内容,在此不累述。
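上述"在发射体平面超声波束的过程中插入体聚焦超声波束的发射"的交替方式,可以用如下示意性的 Python 片段来说明(每隔固定次数插入一次聚焦波仅为一种假设的交替策略,函数名与参数亦为说明用的假设):

```python
# 示意:生成一段发射序列,'P' 表示体平面超声波束(回波用于计算流体
# 速度矢量信息),'F' 表示体聚焦超声波束(回波用于获取高质量的三维
# 超声图像数据);每发射 interval 次平面波后插入一次聚焦波。

def beam_schedule(total, interval):
    """返回长度为 total 的发射类型序列。interval 为 0 时全部发射平面波。"""
    seq = []
    for i in range(total):
        if interval > 0 and (i + 1) % (interval + 1) == 0:
            seq.append('F')   # 插入一次体聚焦超声波束的发射
        else:
            seq.append('P')   # 体平面超声波束的发射
    return seq
```

接收端按同样的顺序区分两类回波信号,分别送入速度矢量计算与成像处理。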
此外,数据处理模块还用于根据体超声回波信号,通过灰阶血流成像技术,获得扫描目标的至少一部分的增强型三维超声图像数据。3D图像处理模块还用于分割上述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块,在上述三维超声图像数据中标记上述云朵状的团簇体区域块以显示,获得包含团簇体的体图像数据,以使所述团簇体在输出显示时呈现随时间变化的翻滚状视觉效果。具体实现方式参见前文相关说明。
又如,在本发明的一些实施例中,如图1中,系统中还包括:人机交互设备,用于获取用户输入的命令;3D图像处理模块还用于至少执行以下步骤中之一:
通过移动图像中显示的光标、或者通过手势输入来选择分布密度,获取用户输入的分布密度指令,依据上述分布密度指令在上述扫描目标内随机选择上述目标点;
通过移动图像中显示的光标、或者通过手势输入来选择目标点位置,获取用户输入的标记位置指令,依据上述标记位置指令获得上述目标点;
根据预先设定的分布密度在上述扫描目标内随机选择上述目标点;
获取用户输入的模式切换指令,从当前显示模式切换到输出显示团簇体使其呈现随时间变化的翻滚状视觉效果的显示模式,其中,通过灰阶血流成像技术获得增强型三维超声图像数据,并从增强型三维超声图像数据中分割表征流体区域的感兴趣区域从而获得所述团簇体;
根据用户输入的命令,对上述三维超声图像数据分层次设置不同的透明度;
根据用户输入的命令,配置上述体图像数据中包括的根据解剖学组织结构及层次关系用以呈现各个组织结构的立体图像区域的色彩参数;
根据用户输入的命令,配置在上述流体速度矢量标识的颜色、立体形状、透明度中的其中一种或者两种参数组合;
根据用户输入的命令,配置团簇体区域块的色彩信息;
根据用户输入的命令,配置关联标志的色彩信息及形状参数;
根据用户输入的命令,配置在上述3D图像中显示的光标的位置或参数,其中,上述显示屏显示装置还用于在图像中显示光标;和
根据用户输入的命令,切换上述发射电路用于激励上述探头向扫描目标发射体超声波束的类型。
以上有关3D图像处理模块根据用户输入的命令执行相应操作的步骤参见前文相关内容,在此不再累述。
在本发明的一些实施例中,如图25所示,3D图像处理模块用于在上述三维超声图像数据中标记目标点随时间变化的流体速度矢量信息,获得包含流体速度矢量标识的上述体图像数据;上述系统还包括空间立体显示装置
800,用于基于真三维立体图像显示技术将上述体图像数据显示为动态的空间立体图像,上述空间立体显示装置800包括基于全息显示技术的全息显示设备和基于体三维显示技术的体像素显示设备中之一;上述视差图像生成模块包括第一摄像装置841和第二摄像装置842,第一摄像装置841和第二摄像装置842从两个角度拍摄上述动态的空间立体图像,获得上述两路视差图像数据。第一摄像装置和第二摄像装置可以是相同结构,例如,都是红外摄像机、光学照相机等。
上述空间立体显示装置800包括基于全息显示技术的全息显示设备和基于体三维显示技术的体像素显示设备中之一。具体可参见前文中有关步骤S500中的相关说明,如图15至图17所示。
在本发明的一些实施例中,如图25所示,上述人机交互设备10包括:与数据处理模块连接的带有触摸显示屏的电子设备840。该电子设备840通过通讯接口(无线或有线通讯接口)与数据处理模块9相连,用于接收三维超声图像数据和目标点的流体速度矢量信息用以在触摸显示屏上显示,呈现超声图像(该超声图像可以是基于三维超声图像数据显示的二维或三维超声图像)及叠加在超声图像上的流体速度矢量信息;接收用户在触摸显示屏输入的操作命令,并将该操作命令传输给数据处理模块9,这里的操作命令可以包括上述数据处理模块9所依据的任何一种或几种用户输入的命令;数据处理模块9用于根据操作命令获得相关配置或切换指令,并传输给空间立体显示装置800;空间立体显示装置800用于根据配置或切换指令,调整空间立体图像的显示结果,用以在空间立体图像上同步显示根据用户在触摸显示屏输入的操作命令,而执行的图像旋转、图像参数配置、图像显示模式切换等控制结果。图25所示,空间立体显示装置800采用图15所示的全息显示设备,那么通过在与数据处理模块9相连的电子设备840上同步显示超声图像及叠加在超声图像上的流体速度矢量信息,从而提供观察者用户输入操作命令的一种方式,并通过该方式与显示的空间立体图像进行交互。
此外,在本发明的一些实施例中,人机交互设备10还可以是物理操作键
(如键盘、操作杆、滚轮等)、虚拟键盘、或如带摄像头的手势输入设备等。这里的手势输入设备包括:通过采集手势输入时的图像,并利用图像识别技术来跟踪手势输入的设备,例如通过红外摄像头采集手势输入的图像来利用图像识别技术获得手势输入所代表的操作指令。
基于上述实施例,本发明还提供了一种三维超声流体成像系统,其包括:
探头1;
发射电路2,用于激励上述探头1向扫描目标发射体超声波束;
接收电路4和波束合成模块5,用于接收上述体超声波束的回波,获得体超声回波信号;
数据处理模块9,用于根据上述体超声回波信号,通过灰阶血流成像技术,获得上述扫描目标的至少一部分的增强型三维超声图像数据;
3D图像处理模块11,用于分割上述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块,在上述三维超声图像数据中标记上述云朵状的团簇体区域块,获得包含云朵状团簇体的体图像数据;
视差图像生成模块12,用于将上述体图像数据转化为两路视差图像数据;
显示屏显示装置8,用于输出显示上述两路视差图像数据,以使人眼能够观察到随时间变化呈翻滚状的团簇体的视觉效果。
上述发射电路2用于执行上述步骤S100,接收电路4和波束合成模块5用于执行上述步骤S200,上述数据处理模块9包括信号处理模块6和/或图像处理模块7,信号处理模块6用于执行对合成的回波信号的处理,而图像处理模块7用于执行上述有关增强型三维超声图像数据的图像处理过程,即前述步骤S310根据上述预设时间段内获得的上述体超声回波信号,获取上述扫描目标的至少一部分的三维超声图像数据。3D图像处理模块11用于执行上述步骤S510中针对增强型三维超声图像数据中团簇体的分割和标记处理过程,视差图像生成模块12用于执行步骤S600。显示屏显示装置8进行3D超声成像显示,执行上述步骤S700。上述各个功能模块的执行步骤参见前述有关超声成像显示方法的相关步骤说明,在此不累述。
在本发明的一些实施例中,3D图像处理模块还用于将上述三维超声图像数据转化为透视效果的体图像数据,并在体图像数据中标记随时间变化的云朵状的团簇体区域块。
在本发明的一些实施例中,3D图像处理模块还用于:
对每帧三维超声图像数据分层次设置不同透明度,并在每帧三维超声图像数据中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成上述体图像数据;或,
基于三维绘图软件将每帧三维超声图像数据转换成一幅三维透视效果图,并在每幅三维效果图中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成上述体图像数据。
在本发明的一些实施例中,上述3D图像处理模块还用于执行以下步骤来将上述三维超声图像数据转化为透视效果的体图像数据:
对上述三维超声图像数据作平行切面或作同心球切面,将每个切面设置为不同的透明度或者多个切面依次设置阶梯式递变的透明度;和/或,
对上述三维超声图像数据进行组织结构分割,将分割获得的组织结构区域设置不同的透明度。
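上述"多个切面依次设置阶梯式递变的透明度"的做法,可以用如下示意性的 Python 片段来说明(由外层到内层线性递减仅为一种假设的递变方式,函数名与默认参数亦为说明用的假设):

```python
# 示意:对三维超声图像数据的多个平行切面(或同心球切面)由外层到
# 内层设置阶梯式递变的不透明度 alpha,使内部的血流信息可以在透视
# 效果的体图像数据中显现出来。

def stepped_opacities(num_slices, outer=0.9, inner=0.2):
    """返回由外层切面到内层切面依次递减的 alpha 值列表。"""
    if num_slices == 1:
        return [outer]
    step = (outer - inner) / (num_slices - 1)
    return [outer - i * step for i in range(num_slices)]
```

将返回的 alpha 值逐一赋给各切面,即可获得外实内透、层次分明的透视显示效果。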
在本发明的一些实施例中,上述3D图像处理模块还用于:
在上述分割上述增强型三维超声图像数据中用以表征流体区域的感兴趣区域获得云朵状的团簇体区域块的步骤中,基于图像灰度分割上述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得不同灰度特征的团簇体区域块,并在上述三维超声图像数据中通过不同的色彩渲染上述不同灰度特征团簇体区域块;或者,
对分割获得的同一云朵状的团簇体区域块,按照上述团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染。
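上述"按照团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染",可以用如下示意性的 Python 片段来说明(由暗红到亮黄的线性伪彩色映射仅为一种假设的映射方式,函数名亦为说明用的假设,并非唯一实现):

```python
# 示意:将团簇体区域块内体素的灰度值(0~255)线性映射为伪彩色,
# 灰度越高颜色越亮,从暗红 (200,0,0) 渐变到亮黄 (255,255,0)。

def gray_to_color(gray):
    """gray: 0~255 的灰度值,返回 (R, G, B) 三元组。"""
    g = max(0, min(255, int(gray)))        # 截断到合法灰度范围
    return (200 + g * 55 // 255, g, 0)

def render_cluster(voxel_grays):
    """对团簇体区域块内的每个体素灰度叠加相应的色彩信息。"""
    return [gray_to_color(g) for g in voxel_grays]
```

不同灰度特征的团簇体区域块由此呈现出不同的色彩,便于观察者分辨流体区域内的灰阶变化。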
上述有关3D图像处理模块的功能可参见前文中的相关说明。
综上所述,本发明突破现有超声成像系统在血流显像技术上的不足,提供了一种三维超声流体成像方法及超声成像系统,可适用于对血流信息的成像与显示。其通过3D立体显示技术并借助较为先进的显示屏,为用户提供了更好的3D超声图像观察视角,既能够实时了解扫描位置,又可以使图像显示效果更加真实地显现血流信息,真实再现扫描目标内流体运动的情况,为用户提供多角度、全方位的观测视角,为医护人员提供更为全面、更为精准的图像数据,为在超声系统上实现的血流成像显示技术开创了一种更加新型的血流成像显示方式。此外,本发明还提供一种新型的计算并显示目标点流体速度矢量信息的方法,其能够更加真实地反映流体实际流动状态,并直观地体现目标点沿流向方向的运动及其移动轨迹。同时,本发明还提供了更加个性化的自定义服务,为方便用户观察真实的流体状态提供更为精确、更为直观的数据支持。
本发明还提供了一种可以使超声立体图像呈现灰阶增强效果的显示模式,其中用不同色彩表征感兴趣区域的灰度变化,并动态展现团簇区域的流动情况。相对传统的显示方式,本发明的3D显示效果更加生动、更加真实、信息量更加丰富。
以上所述实施例仅表达了本发明的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变形和改进、组合,这些都属于本发明的保护范围。因此,本发明专利的保护范围应以所附权利要求为准。
Claims (43)
- 一种三维超声流体成像方法,其包括:向扫描目标发射体超声波束;接收所述体超声波束的回波,获得体超声回波信号;根据所述体超声回波信号,获取所述扫描目标的至少一部分的三维超声图像数据;基于所述体超声回波信号,获得所述扫描目标内目标点的流体速度矢量信息;在所述三维超声图像数据中标记目标点的流体速度矢量信息形成流体速度矢量标识,获得包含流体速度矢量标识的体图像数据;将所述体图像数据转化为两路视差图像数据;输出显示所述两路视差图像数据。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述目标点的流体速度矢量信息包括:所述目标点连续移动到所述三维超声图像数据中相应位置处而依次对应获得的流体速度矢量,以使所述流体速度矢量标识在输出显示时呈现随时间变化的流动状视觉效果。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述在所述三维超声图像数据中标记目标点的流体速度矢量信息形成流体速度矢量标识,获得包含流体速度矢量标识的体图像数据的步骤中,将所述三维超声图像数据转化为透视效果的体图像数据,并在体图像数据中标记目标点随时间变化的流体速度矢量信息,形成可随时间变化的所述流体速度矢量标识。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述在所述三维超声图像数据中标记目标点的流体速度矢量信息形成流体速度矢量标识,获得包含流体速度矢量标识的体图像数据的步骤包括:对每帧三维超声图像数据分层次设置不同透明度,并在每帧三维超声图像数据中标记目标点在相应位置处的流体速度矢量信息,获得包含流体速度 矢量标识的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于三维绘图软件将每帧三维超声图像数据转换成一副三维透视效果图,并在每副三维效果图中标记目标点相应位置处的流体速度矢量信息,获得包含流体速度矢量标识的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于真三维立体图像显示技术将所述三维超声图像数据显示为动态的空间立体图像,并在所述空间立体图像中标记目标点随时间变化的流体速度矢量信息,获得所述体图像数据。
- 根据权利要求4所述的三维超声流体成像方法,其特征在于,所述对所述三维超声图像数据分层次设置不同透明度的步骤包括:对所述三维超声图像数据作平行切面或作同心球切面,将每个切面设置为不同的透明度或者多个切面依次设置阶梯式递变的透明度;和/或,对所述三维超声图像数据进行组织结构分割,将分割获得的组织结构区域设置不同的透明度。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述将所述体图像数据转化为两路视差图像数据的步骤包括:提取所述体图像数据中时间相邻的第一时间相位的体图像和第二时间相位的体图像,依据第一时间相位的体图像以任意视差数生成一路视差图像数据,依据第二时间相位的体图像以相同的视差数生成另一路视差图像数据,从而获得所述两路视差图像数据;或者,播放所述体图像数据,模拟人的左右眼建立两个观察视角,对播放中的所述体图像数据分别在所述两个观察视角进行拍摄,从而获取所述两路视差图像数据。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述获取所述扫描目标的至少一部分的三维超声图像数据的步骤中,还包括:通过灰阶血流成像技术,获得所述扫描目标的至少一部分的增强型三维超声图像数据;执行所述在所述三维超声图像数据中标记目标点的流体速度矢量信息,获得包含流体速度矢量标识的体图像数据的步骤中,包括:分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块;在所述三维超声图像数据中标记所述云朵状的团簇体区域块形成团簇体,获得包含团簇体的体图像数据,以使所述团簇体在输出显示时呈现随时间变化的翻滚状视觉效果。
- 根据权利要求7所述的三维超声流体成像方法,其特征在于,所述方法中还包括在所述云朵状的团簇体区域块上叠加色彩信息的步骤,该步骤包括:基于图像灰度分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得不同灰度特征的团簇体区域块,并在所述三维超声图像数据中通过不同的色彩渲染所述不同灰度特征团簇体区域块;或者,对于同一云朵状的团簇体区域块,按照所述团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染;或者,按照团簇体区域块所表征的流体区域的速度信息,在该团簇体区域块上叠加相应设置的色彩信息。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述流体速度矢量标识采用立体标志物,并通过立体标志物的体积大小或旋转速度来表示流体速度矢量的大小,和/或,通过所述立体标志物上的箭头指向、方向指引部的指向或使立体标志物随时间移动来表征流体速度矢量的方向。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述基于所述体超声回波信号获得所述扫描目标内目标点的流体速度矢量信息的过程中,所述目标点通过执行以下步骤之一来选择:通过移动图像中显示的光标、或者通过手势输入来选择分布密度,获取用户输入的分布密度指令,依据所述分布密度指令在所述扫描目标内随机选择所述目标点;通过移动图像中显示的光标、或者通过手势输入来选择目标点位置,获取用户输入的标记位置指令,依据所述标记位置指令获得所述目标点;和根据预先设定的分布密度在所述扫描目标内随机选择所述目标点。
- 根据权利要求2所述的三维超声流体成像方法,其特征在于,执行所述在所述三维超声图像数据中标记目标点的流体速度矢量信息的步骤时,还包括:通过关联标志依次跨接同一目标点连续移动到所述三维超声图像数据中的多个相应位置,形成该目标点的运动行程轨迹,用以在输出显示时显示所述运动行程轨迹。
- 根据权利要求11所述的三维超声流体成像方法,其特征在于,所述关联标志包括细长柱体、分段式细长柱体或彗尾状标志。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述基于所述体超声回波信号获得所述扫描目标内目标点的流体速度矢量信息的步骤包括:根据体超声回波信号获得至少两帧三维超声图像数据;根据三维超声图像数据获得在目标点处沿时间方向的梯度,根据三维超声图像数据获得在目标点处沿超声波传播方向的第一速度分量;根据所述梯度和所述第一速度分量,分别获得在目标点处沿第一方向的第二速度分量和沿第二方向上的第三速度分量,所述第一方向、第二方向与超声波传播方向两两相互垂直;根据第一速度分量、第二速度分量和第三速度分量合成获得目标点的流体速度矢量。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述方法中,从向扫描目标发射体超声波束的步骤开始到获取三维超声图像数据和目标点的流体速度矢量信息的过程包括:向扫描目标发射体平面超声波束,接收所述体平面超声波束的回波,获得体平面超声回波信号,根据所述体平面超声回波信号,获取所述三维超声图像数据,基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息;或者,向扫描目标分别发射体平面超声波束和体聚焦超声波束,接收所述体平面超声波束的回波,获得体平面超声回波信号,接收所述体聚焦超声波束的回波,获得体聚焦超声回波信号,根据所述体聚焦超声回波信号,获取所述三维超声图像数据,基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息。
- 根据权利要求1所述的三维超声流体成像方法,其特征在于,所述接收所述体超声波束的回波获得体超声回波信号的步骤包括:接收来自多个扫描体上体超声波束的回波,获得多组体超声回波信号,其中激励超声波发射阵元沿多个超声波传播方向向扫描目标发射体超声波束,使所述体超声波束在扫描目标所在的空间内传播用以形成多个扫描体;所述基于所述体超声回波信号获得所述扫描目标内目标点的流体速度矢量信息的步骤包括:基于所述多组体超声回波信号中的一组体超声回波信号,计算所述扫描目标内目标点的一个速度分量,依据所述多组体超声回波信号分别获取多个速度分量;根据多个速度分量,合成获得所述目标点的流体速度矢量,生成所述目标点的流体速度矢量信息。
- 一种三维超声流体成像方法,其包括:向扫描目标发射体超声波束;接收所述体超声波束的回波,获得体超声回波信号;根据所述体超声回波信号,通过灰阶血流成像技术,获得所述扫描目标的至少一部分的增强型三维超声图像数据;分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块;在所述三维超声图像数据中标记所述云朵状的团簇体区域块形成团簇体,获得包含团簇体的体图像数据;将所述体图像数据转化为两路视差图像数据;输出显示所述两路视差图像数据,以使所述团簇体在输出显示时呈现随时间变化的翻滚状视觉效果。
- 根据权利要求16所述的三维超声流体成像方法,其特征在于,所述在所述三维超声图像数据中标记所述云朵状的团簇体区域块,获得包含云朵状团簇体的体图像数据的步骤中,将所述三维超声图像数据转化为透视效果的体图像数据,并在体图像数据中标记随时间变化的云朵状的团簇体区域块。
- 根据权利要求16所述的三维超声流体成像方法,其特征在于,所述在所述三维超声图像数据中标记所述云朵状的团簇体区域块,获得包含云朵状团簇体的体图像数据的步骤包括:对每帧三维超声图像数据分层次设置不同透明度,并在每帧三维超声图像数据中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于三维绘图软件将每帧三维超声图像数据转换成一副三维透视效果图,并在每副三维效果图中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于真三维立体图像显示技术将所述三维超声图像数据显示为动态的空间立体图像,并在所述空间立体图像中标记随时间变化的云朵状的团簇体区域块,获得所述体图像数据。
- 根据权利要求18所述的三维超声流体成像方法,其特征在于,所述对所述三维超声图像数据分层次设置不同透明度的步骤包括:对所述三维超声图像数据作平行切面或作同心球切面,将每个切面设置为不同的透明度或者多个切面依次设置阶梯式递变的透明度;和/或,对所述三维超声图像数据进行组织结构分割,将分割获得的组织结构区 域设置不同的透明度。
- 根据权利要求16所述的三维超声流体成像方法,其特征在于,所述将所述体图像数据转化为两路视差图像数据的步骤包括:提取所述体图像数据中时间相邻的第一时间相位的体图像和第二时间相位的体图像,依据第一时间相位的体图像以任意视差数生成一路视差图像数据,依据第二时间相位的体图像以相同的视差数生成另一路视差图像数据,从而获得所述两路视差图像数据;或者,播放所述体图像数据,模拟人的左右眼建立两个观察视角,对播放中的所述体图像数据分别在所述两个观察视角进行拍摄,从而获取所述两路视差图像数据。
- 根据权利要求16所述的三维超声流体成像方法,其特征在于,所述方法中还包括在云朵状的团簇体区域块上叠加色彩信息的步骤,该步骤包括:基于图像灰度分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得不同灰度特征的团簇体区域块,并在所述三维超声图像数据中通过不同的色彩渲染所述不同灰度特征团簇体区域块;或者,对于同一云朵状的团簇体区域块,按照所述团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染;或者,按照团簇体区域块所表征的流体区域的速度信息,在该团簇体区域块上叠加相应设置的色彩信息。
- 一种三维超声流体成像系统,其特征在于,包括:探头;发射电路,用于激励所述探头向扫描目标发射体超声波束;接收电路和波束合成模块,用于接收所述体超声波束的回波,获得体超声回波信号;数据处理模块,用于根据所述体超声回波信号,获取所述扫描目标的至少一部分的三维超声图像数据,并基于所述体超声回波信号,获得所述扫描 目标内目标点的流体速度矢量信息;3D图像处理模块,用于在所述三维超声图像数据中标记目标点的流体速度矢量信息形成流体速度矢量标识,获得包含流体速度矢量标识的体图像数据;视差图像生成模块,用于将所述体图像数据转化为两路视差图像数据;及显示屏显示装置,用于接收所述两路视差图像数据并显示。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于标记所述目标点连续移动到所述三维超声图像数据中相应位置处时依次对应获得的流体速度矢量,以使所述流体速度矢量标识在输出显示时呈现随时间变化的流动状视觉效果。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述显示屏显示装置包括:用于接收并显示所述两路视差图像数据的显示屏和穿戴式眼镜,或者用于接收并显示所述两路视差图像数据的裸眼3D显示屏。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于:将所述三维超声图像数据转化为透视效果的体图像数据,并在体图像数据中标记目标点随时间变化的流体速度矢量信息,形成可随时间变化的所述流体速度矢量标识。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于:对每帧三维超声图像数据分层次设置不同透明度,并在每帧三维超声图像数据中标记目标点在相应位置处的流体速度矢量信息,获得包含流体速度矢量标识的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于三维绘图软件将每帧三维超声图像数据转换成一副三维透视效果图,并在每副三维效果图中标记目标点相应位置处的流体速度矢量信息,获得包含流体速度矢量标识的单帧体图像,随时间连续的多帧体图像构成所述 体图像数据。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于通过以下过程对所述三维超声图像数据分层次设置不同透明度:对所述三维超声图像数据作平行切面或作同心球切面,将每个切面设置为不同的透明度或者多个切面依次设置阶梯式递变的透明度;和/或,对所述三维超声图像数据进行组织结构分割,将分割获得的组织结构区域设置不同的透明度。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述视差图像生成模块用于:提取所述体图像数据中时间相邻的第一时间相位的体图像和第二时间相位的体图像,依据第一时间相位的体图像以任意视差数生成一路视差图像数据,依据第二时间相位的体图像以相同的视差数生成另一路视差图像数据,从而获得所述两路视差图像数据;或者,播放所述体图像数据,模拟人的左右眼建立两个观察视角,对播放中的所述体图像数据分别在所述两个观察视角进行拍摄,从而获取所述两路视差图像数据。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述数据处理模块还用于根据所述体超声回波信号,通过灰阶血流成像技术,获得所述扫描目标的至少一部分的增强型三维超声图像数据;所述3D图像处理模块还用于分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块,在所述三维超声图像数据中标记所述云朵状的团簇体区域块以显示,获得包含团簇体的体图像数据,以使所述团簇体在输出显示时呈现随时间变化的翻滚状视觉效果。
- 根据权利要求29所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于基于图像灰度分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得不同灰度特征的团簇体区域块,并在 所述三维超声图像数据中通过不同的色彩渲染所述不同灰度特征团簇体区域块;或者,对于分割获得的同一云朵状的团簇体区域块,按照所述团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染;或者,按照团簇体区域块所表征的流体区域的速度信息,在该团簇体区域块上叠加相应设置的色彩信息。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述流体速度矢量标识采用立体标志物,并通过立体标志物的体积大小或旋转速度来表示流体速度矢量的大小,和/或,通过所述立体标志物上的箭头指向、方向指引部的指向或使立体标志物随时间移动来表征流体速度矢量的方向。
- 根据权利要求23所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于:通过关联标志依次跨接同一目标点连续移动到所述三维超声图像数据中的多个相应位置,形成该目标点的运动行程轨迹,用以在输出显示时显示所述运动行程轨迹。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述系统中,所述发射电路用于激励所述探头向扫描目标发射体平面超声波束,所述接收电路和波束合成模块用于接收所述体平面超声波束的回波,获得体平面超声回波信号,所述数据处理模块用于根据所述体平面超声回波信号,获取所述三维超声图像数据,并基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息;或者,所述发射电路用于激励所述探头向扫描目标分别发射体平面超声波束和体聚焦超声波束,所述接收电路和波束合成模块用于接收所述体平面超声波束的回波,获得体平面超声回波信号,所述数据处理模块用于接收所述体聚焦超声波束的回波,获得体聚焦超声回波信号,根据所述体聚焦超声回波信号,获取所述三维超声图像数据,而基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息。
- 根据权利要求22所述的三维超声流体成像系统,其特征在于,所述 3D图像处理模块用于在所述三维超声图像数据中标记目标点随时间变化的流体速度矢量信息,获得包含流体速度矢量标识的所述体图像数据;所述系统还包括:空间立体显示装置,用于基于真三维立体图像显示技术将所述体图像数据显示为动态的空间立体图像,所述空间立体显示装置包括基于全息显示技术的全息显示设备和基于体三维显示技术的体像素显示设备中之一;所述视差图像生成模块包括第一摄像装置和第二摄像装置,第一摄像装置和第二摄像装置从两个角度拍摄所述动态的空间立体图像,获得所述两路视差图像数据。
- 一种三维超声流体成像系统,其特征在于,包括:探头;发射电路,用于激励所述探头向扫描目标发射体超声波束;接收电路和波束合成模块,用于接收所述体超声波束的回波,获得体超声回波信号;数据处理模块,用于根据所述体超声回波信号,通过灰阶血流成像技术,获得所述扫描目标的至少一部分的增强型三维超声图像数据;3D图像处理模块,用于分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得云朵状的团簇体区域块,在所述三维超声图像数据中标记所述云朵状的团簇体区域块,获得包含云朵状团簇体的体图像数据;视差图像生成模块,用于将所述体图像数据转化为两路视差图像数据;显示屏显示装置,用于输出显示所述两路视差图像数据,以使所述团簇体在输出显示时呈现随时间变化的翻滚状视觉效果。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于将所述三维超声图像数据转化为透视效果的体图像数据,并在体图像数据中标记随时间变化的云朵状的团簇体区域块。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于:对每帧三维超声图像数据分层次设置不同透明度,并在每帧三维超声图像数据中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成所述体图像数据;或,基于三维绘图软件将每帧三维超声图像数据转换成一幅三维透视效果图,并在每幅三维效果图中标记云朵状的团簇体区域块,获得包含云朵状团簇体的单帧体图像,随时间连续的多帧体图像构成所述体图像数据。
- 根据权利要求36所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于执行以下步骤来将所述三维超声图像数据转化为透视效果的体图像数据:对所述三维超声图像数据作平行切面或作同心球切面,将每个切面设置为不同的透明度或者多个切面依次设置阶梯式递变的透明度;和/或,对所述三维超声图像数据进行组织结构分割,将分割获得的组织结构区域设置不同的透明度。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述视差图像生成模块还用于:提取所述体图像数据中时间相邻的第一时间相位的体图像和第二时间相位的体图像,依据第一时间相位的体图像以任意视差数生成一路视差图像数据,依据第二时间相位的体图像以相同的视差数生成另一路视差图像数据,从而获得所述两路视差图像数据;或者,播放所述体图像数据,模拟人的左右眼建立两个观察视角,对播放中的所述体图像数据分别在所述两个观察视角进行拍摄,从而获取所述两路视差图像数据。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述3D图像处理模块还用于在所述分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域获得云朵状的团簇体区域块的步骤中,基于图像灰度分割所述增强型三维超声图像数据中用以表征流体区域的感兴趣区域,获得不同灰度特征的团簇体区域块,并在所述三维超声图像数据中通过不同的色彩渲染所述不同灰度特征团簇体区域块;或者,对分割获得的同一云朵状的团簇体区域块,按照所述团簇体区域块内不同区域体的灰度变化叠加不同的色彩进行渲染;或者,按照团簇体区域块所表征的流体区域的速度信息,在该团簇体区域块上叠加相应设置的色彩信息。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述系统中,所述发射电路用于激励所述探头向扫描目标发射体平面超声波束,所述接收电路和波束合成模块用于接收所述体平面超声波束的回波,获得体平面超声回波信号,所述数据处理模块用于根据所述体平面超声回波信号,获取所述三维超声图像数据,并基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息;或者,所述发射电路用于激励所述探头向扫描目标分别发射体平面超声波束和体聚焦超声波束,所述接收电路和波束合成模块用于接收所述体平面超声波束的回波,获得体平面超声回波信号,所述数据处理模块用于接收所述体聚焦超声波束的回波,获得体聚焦超声回波信号,根据所述体聚焦超声回波信号,获取所述三维超声图像数据,而基于所述体平面超声回波信号,获得所述目标点的流体速度矢量信息。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述显示屏显示装置包括:用于接收并显示所述两路视差图像数据的显示屏和穿戴式眼镜,或者用于接收并显示所述两路视差图像数据的裸眼3D显示屏。
- 根据权利要求35所述的三维超声流体成像系统,其特征在于,所述系统还包括空间立体显示装置,用于基于真三维立体图像显示技术将所述体图像数据显示为动态的空间立体图像,所述空间立体显示装置包括基于全息显示技术的全息显示设备和基于体三维显示技术的体像素显示设备中之一;所述视差图像生成模块包括第一摄像装置和第二摄像装置,第一摄像装 置和第二摄像装置从两个角度拍摄所述动态的空间立体图像,获得所述两路视差图像数据。
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/086068 WO2017020256A1 (zh) | 2015-08-04 | 2015-08-04 | 三维超声流体成像方法及系统 |
CN201580081287.8A CN107847214B (zh) | 2015-08-04 | 2015-08-04 | 三维超声流体成像方法及系统 |
CN202011478109.8A CN112704516B (zh) | 2015-08-04 | 2015-08-04 | 三维超声流体成像方法及系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/086068 WO2017020256A1 (zh) | 2015-08-04 | 2015-08-04 | 三维超声流体成像方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017020256A1 true WO2017020256A1 (zh) | 2017-02-09 |
Family
ID=57943797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/086068 WO2017020256A1 (zh) | 2015-08-04 | 2015-08-04 | 三维超声流体成像方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN112704516B (zh) |
WO (1) | WO2017020256A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109490896A (zh) * | 2018-11-15 | 2019-03-19 | 大连海事大学 | 一种极端环境三维图像采集处理系统 |
CN111544038A (zh) * | 2020-05-12 | 2020-08-18 | 上海深至信息科技有限公司 | 一种云平台超声成像系统 |
US20210033440A1 (en) * | 2019-07-29 | 2021-02-04 | Supersonic Imagine | Ultrasonic system for detecting fluid flow in an environment |
CN112712487A (zh) * | 2020-12-23 | 2021-04-27 | 北京软通智慧城市科技有限公司 | 一种场景视频融合方法、系统、电子设备及存储介质 |
US11453018B2 (en) | 2019-06-17 | 2022-09-27 | Ford Global Technologies, Llc | Sensor assembly with movable nozzle |
CN117770870A (zh) * | 2024-02-26 | 2024-03-29 | 之江实验室 | 一种基于双线阵超声波场分离的超声成像方法及装置 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117731322A (zh) * | 2018-12-06 | 2024-03-22 | 深圳迈瑞生物医疗电子股份有限公司 | 超声成像方法、设备及可读存储介质 |
CN111358493B (zh) * | 2020-03-09 | 2023-04-07 | 深圳开立生物医疗科技股份有限公司 | 应用于超声波成像的数据处理方法、装置、设备及介质 |
CN111311523B (zh) * | 2020-03-26 | 2023-09-05 | 北京迈格威科技有限公司 | 图像处理方法、装置、系统和电子设备 |
CN112767309B (zh) * | 2020-12-30 | 2024-08-06 | 无锡祥生医疗科技股份有限公司 | 超声扫查方法、超声设备、系统以及存储介质 |
CN113222868B (zh) * | 2021-04-25 | 2023-04-25 | 北京邮电大学 | 图像合成方法及装置 |
CN113362360B (zh) * | 2021-05-28 | 2022-08-30 | 上海大学 | 基于流体速度场的超声颈动脉斑块分割方法 |
CN114209354B (zh) * | 2021-12-20 | 2024-10-01 | 深圳开立生物医疗科技股份有限公司 | 一种超声图像的显示方法、装置、设备及可读存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5779641A (en) * | 1997-05-07 | 1998-07-14 | General Electric Company | Method and apparatus for three-dimensional ultrasound imaging by projecting filtered pixel data |
CN101347341A (zh) * | 2007-07-17 | 2009-01-21 | 阿洛卡株式会社 | 超声波诊断装置 |
CN101584589A (zh) * | 2008-05-20 | 2009-11-25 | 株式会社东芝 | 图像处理装置以及图像处理方法 |
CN102613990A (zh) * | 2012-02-03 | 2012-08-01 | 声泰特(成都)科技有限公司 | 三维超声频谱多普勒的血流速度及其空间分布显示方法 |
CN103181782A (zh) * | 2011-12-29 | 2013-07-03 | 三星麦迪森株式会社 | 超声系统和提供多普勒频谱图像的方法 |
CN103876780A (zh) * | 2014-03-03 | 2014-06-25 | 天津迈达医学科技股份有限公司 | 高频超声血流灰阶成像方法及装置 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1985002105A1 (en) * | 1983-11-10 | 1985-05-23 | Acoustec Partners | Ultrasound diagnostic apparatus |
US6102864A (en) * | 1997-05-07 | 2000-08-15 | General Electric Company | Three-dimensional ultrasound imaging of velocity and power data using average or median pixel projections |
US7141020B2 (en) * | 2002-02-20 | 2006-11-28 | Koninklijke Philips Electronics N.V. | Portable 3D ultrasound system |
JP4060615B2 (ja) * | 2002-03-05 | 2008-03-12 | 株式会社東芝 | 画像処理装置及び超音波診断装置 |
JP4137516B2 (ja) * | 2002-05-20 | 2008-08-20 | 株式会社東芝 | 超音波診断装置 |
US7637871B2 (en) * | 2004-02-26 | 2009-12-29 | Siemens Medical Solutions Usa, Inc. | Steered continuous wave doppler methods and systems for two-dimensional ultrasound transducer arrays |
EP1974672B9 (en) * | 2007-03-28 | 2014-04-16 | Kabushiki Kaisha Toshiba | Ultrasonic imaging apparatus and ultrasonic velocity optimization method |
JP5495607B2 (ja) * | 2008-05-27 | 2014-05-21 | キヤノン株式会社 | 超音波診断装置 |
US9204858B2 (en) * | 2010-02-05 | 2015-12-08 | Ultrasonix Medical Corporation | Ultrasound pulse-wave doppler measurement of blood flow velocity and/or turbulence |
JP6058283B2 (ja) * | 2011-05-26 | 2017-01-11 | 東芝メディカルシステムズ株式会社 | 超音波診断装置 |
WO2013059659A1 (en) * | 2011-10-19 | 2013-04-25 | Verasonics, Inc. | Estimation and display for vector doppler imaging using plane wave transmissions |
- 2015-08-04 CN CN202011478109.8A patent/CN112704516B/zh active Active
- 2015-08-04 CN CN201580081287.8A patent/CN107847214B/zh active Active
- 2015-08-04 WO PCT/CN2015/086068 patent/WO2017020256A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5779641A (en) * | 1997-05-07 | 1998-07-14 | General Electric Company | Method and apparatus for three-dimensional ultrasound imaging by projecting filtered pixel data |
CN101347341A (zh) * | 2007-07-17 | 2009-01-21 | 阿洛卡株式会社 | 超声波诊断装置 |
CN101584589A (zh) * | 2008-05-20 | 2009-11-25 | 株式会社东芝 | 图像处理装置以及图像处理方法 |
CN103181782A (zh) * | 2011-12-29 | 2013-07-03 | 三星麦迪森株式会社 | 超声系统和提供多普勒频谱图像的方法 |
CN102613990A (zh) * | 2012-02-03 | 2012-08-01 | 声泰特(成都)科技有限公司 | 三维超声频谱多普勒的血流速度及其空间分布显示方法 |
CN103876780A (zh) * | 2014-03-03 | 2014-06-25 | 天津迈达医学科技股份有限公司 | 高频超声血流灰阶成像方法及装置 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109490896A (zh) * | 2018-11-15 | 2019-03-19 | 大连海事大学 | 一种极端环境三维图像采集处理系统 |
CN109490896B (zh) * | 2018-11-15 | 2023-05-05 | 大连海事大学 | 一种极端环境三维图像采集处理系统 |
US11453018B2 (en) | 2019-06-17 | 2022-09-27 | Ford Global Technologies, Llc | Sensor assembly with movable nozzle |
US20210033440A1 (en) * | 2019-07-29 | 2021-02-04 | Supersonic Imagine | Ultrasonic system for detecting fluid flow in an environment |
CN111544038A (zh) * | 2020-05-12 | 2020-08-18 | 上海深至信息科技有限公司 | 一种云平台超声成像系统 |
CN111544038B (zh) * | 2020-05-12 | 2024-02-02 | 上海深至信息科技有限公司 | 一种云平台超声成像系统 |
CN112712487A (zh) * | 2020-12-23 | 2021-04-27 | 北京软通智慧城市科技有限公司 | 一种场景视频融合方法、系统、电子设备及存储介质 |
CN117770870A (zh) * | 2024-02-26 | 2024-03-29 | 之江实验室 | 一种基于双线阵超声波场分离的超声成像方法及装置 |
CN117770870B (zh) * | 2024-02-26 | 2024-05-10 | 之江实验室 | 一种基于双线阵超声波场分离的超声成像方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN107847214B (zh) | 2021-01-01 |
CN112704516A (zh) | 2021-04-27 |
CN107847214A (zh) | 2018-03-27 |
CN112704516B (zh) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016192114A1 (zh) | 超声流体成像方法及超声流体成像系统 | |
WO2017020256A1 (zh) | 三维超声流体成像方法及系统 | |
JP6147489B2 (ja) | 超音波画像形成システム | |
CN106102587B (zh) | 超声血流成像显示方法及超声成像系统 | |
JP2023098929A (ja) | 3d環境からデータをレンダリングするためのシステムおよび方法 | |
KR101840405B1 (ko) | 광시야각 디스플레이들 및 사용자 인터페이스들 | |
WO2015098807A1 (ja) | 被写体と3次元仮想空間をリアルタイムに合成する撮影システム | |
KR102680570B1 (ko) | 이전의 관점으로부터의 렌더링된 콘텐츠 및 비-렌더링된 콘텐츠를 사용하는 새로운 프레임의 생성 | |
JP2012252697A (ja) | ボリューム・レンダリングした画像内の3dカーソルの深さを示すための方法及びシステム | |
CN108475180A (zh) | 在多个显示区域之间分布视频 | |
US20060126927A1 (en) | Horizontal perspective representation | |
US9224240B2 (en) | Depth-based information layering in medical diagnostic ultrasound | |
Soile et al. | Accurate 3D textured models of vessels for the improvement of the educational tools of a museum | |
KR20140035747A (ko) | 초음파 영상 장치 및 그 제어방법 | |
EP2962290B1 (en) | Relaying 3d information by depth simulation using 2d pixel displacement | |
Vasudevan et al. | A methodology for remote virtual interaction in teleimmersive environments | |
Barabas | Holographic television: measuring visual performance with holographic and other 3D television technologies | |
Roganov et al. | 3D systems that imitate visually observable objects to train a person's ability to visually determine distance to a selected object | |
Schmidt | Blended Spaces: Perception and Interaction in Projection-Based Spatial Augmented Reality Environments | |
Hassaine | Efficient rendering for three-dimensional displays |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15900031 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 10.04.2018) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 15900031 Country of ref document: EP Kind code of ref document: A1 |