WO2021253293A1 - Contrast-enhanced ultrasound imaging method, ultrasound imaging apparatus and storage medium - Google Patents

Contrast-enhanced ultrasound imaging method, ultrasound imaging apparatus and storage medium

Info

Publication number
WO2021253293A1
WO2021253293A1 (PCT/CN2020/096627)
Authority
WO
WIPO (PCT)
Prior art keywords
data
rendering
tissue
rendering image
contrast
Prior art date
Application number
PCT/CN2020/096627
Other languages
English (en)
French (fr)
Inventor
王艾俊
林穆清
邹耀贤
桑茂栋
何绪金
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司 filed Critical 深圳迈瑞生物医疗电子股份有限公司
Priority to CN202080001014.9A priority Critical patent/CN111836584B/zh
Priority to PCT/CN2020/096627 priority patent/WO2021253293A1/zh
Priority to CN202410325546.8A priority patent/CN118285839A/zh
Publication of WO2021253293A1 publication Critical patent/WO2021253293A1/zh
Priority to US18/081,300 priority patent/US20230210501A1/en

Classifications

    • A61B 8/481: Diagnostic techniques involving the use of contrast agent, e.g. microbubbles introduced into the bloodstream
    • A61B 8/466: Displaying means of special interest adapted to display 3D data
    • A61B 8/483: Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5207: Processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5246: Combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
    • A61B 8/06: Measuring blood flow
    • A61B 8/463: Displaying multiple images or images and diagnostic data on one display
    • A61B 8/5238: Combining image data of patient, e.g. merging several images from different acquisition modes into one image

Definitions

  • This application relates to the technical field of ultrasound imaging, and more specifically to an ultrasound contrast imaging method, ultrasound imaging device and storage medium.
  • Ultrasound instruments are generally used by doctors to observe the internal tissue structure of the human body: the doctor places the probe on the skin surface over the body part of interest to obtain an ultrasound image of that part.
  • Because it is safe, convenient, non-invasive, and low in cost, ultrasound has become a main auxiliary method for clinical diagnosis.
  • An ultrasound contrast agent is a substance used to enhance image contrast in ultrasound imaging. It generally consists of micron-sized encapsulated microbubbles with strong acoustic impedance. The microbubbles are injected into the blood circulatory system intravenously to enhance the reflection intensity of the ultrasound, thereby achieving contrast-enhanced ultrasound imaging. Compared with conventional ultrasound imaging, it can significantly improve the detection of the microcirculation perfusion level of diseased tissue. Compared with other examination methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), contrast-enhanced ultrasound has the advantages of simplicity, short examination time, real-time operation, non-invasiveness, and absence of radiation, and has become a very important technology in ultrasound diagnosis.
  • Three-dimensional imaging processes continuously collected dynamic two-dimensional slice imaging data through a series of computer operations, arranges the slices in a certain order to reconstruct three-dimensional data, and then uses three-dimensional rendering techniques (surface rendering, volume rendering, etc.) to restore the three-dimensional structure of tissues and organs, helping doctors make more detailed clinical diagnoses.
  • Medical three-dimensional contrast-enhanced ultrasound imaging has been widely used in examinations of the thyroid (nodule detection), breast, liver (cirrhosis, nodules, tumors), fallopian tube (blockage), and other fields.
  • the present application provides a contrast-enhanced ultrasound imaging solution, which can help users more intuitively understand and observe the spatial position relationship of the contrast agent in the tissue, and obtain more clinical information.
  • the contrast-enhanced ultrasound imaging solution proposed by the present application will be briefly described below, and more details will be described in the specific implementation in conjunction with the accompanying drawings.
  • In one aspect, a contrast-enhanced ultrasound imaging method includes: controlling an ultrasound probe to transmit ultrasound to a target tissue containing a contrast agent, receiving echoes of the ultrasound, and obtaining first contrast data and first tissue data in real time based on the echoes, where both the first contrast data and the first tissue data are volume data; rendering second contrast data and second tissue data in real time to obtain a mixed rendering image of the second contrast data and the second tissue data, where the second contrast data includes all or part of the first contrast data and the second tissue data includes all or part of the first tissue data; and displaying the mixed rendering image in real time.
  • In another aspect, an ultrasound imaging device includes an ultrasound probe, a transmission/reception sequence controller, a processor, and a display. The transmission/reception sequence controller is used to control the ultrasound probe to transmit ultrasound to a target tissue containing a contrast agent, receive echoes of the ultrasound, and obtain first contrast data and first tissue data in real time based on the echoes, where both the first contrast data and the first tissue data are volume data. The processor is used to render second contrast data and second tissue data in real time to obtain a mixed rendering image of the second contrast data and the second tissue data, where the second contrast data includes all or part of the first contrast data and the second tissue data includes all or part of the first tissue data. The display is used to display the mixed rendering image in real time.
  • In yet another aspect, a storage medium stores a computer program that, when executed, performs the above contrast-enhanced ultrasound imaging method.
  • The contrast-enhanced ultrasound imaging method, ultrasound imaging device, and storage medium simultaneously collect volumetric contrast data and volumetric tissue data and perform fusion rendering of the two to obtain a hybrid rendering image, which helps users more intuitively understand and observe the real-time spatial position relationship of the contrast agent within the tissue and obtain more clinical information.
  • Fig. 1 shows a schematic block diagram of an exemplary ultrasound imaging apparatus for implementing a contrast-enhanced ultrasound imaging method according to an embodiment of the present application.
  • Fig. 2 shows a schematic flowchart of a contrast-enhanced ultrasound imaging method according to an embodiment of the present application.
  • Fig. 3 shows a schematic flow chart of acquiring volumetric contrast data and volumetric tissue data in a contrast-enhanced ultrasound imaging method according to an embodiment of the present application.
  • Fig. 4 shows a schematic flow chart of an example of fusion rendering of volumetric contrast data and volumetric tissue data in a contrast-enhanced ultrasound imaging method according to an embodiment of the present application.
  • Fig. 5 shows a schematic flowchart of another example of fusion rendering of volumetric contrast data and volumetric tissue data in the ultrasound contrast imaging method according to an embodiment of the present application.
  • FIG. 6 shows an exemplary schematic diagram of a hybrid rendering image obtained by the ultrasound contrast imaging method according to an embodiment of the present application.
  • Fig. 7 shows a schematic block diagram of an ultrasound imaging apparatus according to an embodiment of the present application.
  • Fig. 8 shows a schematic block diagram of an ultrasound imaging apparatus according to another embodiment of the present application.
  • FIG. 1 is a schematic structural block diagram of an exemplary ultrasound imaging apparatus 10 for implementing the ultrasound contrast imaging method according to an embodiment of the present application.
  • the ultrasonic imaging apparatus 10 may include an ultrasonic probe 100, a transmission/reception selection switch 101, a transmission/reception sequence controller 102, a processor 103, a display 104 and a memory 105.
  • the transmitting/receiving sequence controller 102 can excite the ultrasonic probe 100 to transmit ultrasonic waves to a target object (object under test), and can also control the ultrasonic probe 100 to receive ultrasonic echoes returned from the target object, thereby obtaining ultrasonic echo signals/data.
  • the processor 103 processes the ultrasound echo signal/data to obtain tissue-related parameters and ultrasound images of the target object.
  • the ultrasound images obtained by the processor 103 may be stored in the memory 105, and these ultrasound images may be displayed on the display 104.
  • the display 104 of the aforementioned ultrasonic imaging device 10 may be a touch display screen, a liquid crystal display screen, or the like; it may also be an independent display device such as a liquid crystal monitor or a television that is independent of the ultrasonic imaging device 10; or it may be the display screen of an electronic device such as a mobile phone or a tablet computer.
  • the memory 105 of the aforementioned ultrasonic imaging device 10 may be a flash memory card, a solid-state memory, a hard disk, or the like.
  • the embodiments of the present application also provide a computer-readable storage medium that stores multiple program instructions. After the multiple program instructions are invoked and executed by the processor 103, part or all of the steps of the contrast-enhanced ultrasound imaging method in the various embodiments of the present application, or any combination of those steps, can be executed.
  • the computer-readable storage medium may be the memory 105, which may be a non-volatile storage medium such as a flash memory card, a solid-state memory, a hard disk, or the like.
  • the processor 103 of the aforementioned ultrasonic imaging device 10 may be implemented by software, hardware, firmware, or a combination thereof, and may use circuits, one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, a combination of the foregoing circuits or devices, or other suitable circuits or devices, so that the processor 103 can execute the corresponding steps of the contrast-enhanced ultrasound imaging method in the various embodiments.
  • the contrast-enhanced ultrasound imaging method of the present application will be described in detail below with reference to FIGS. 2 to 6, and the method may be executed by the aforementioned ultrasound imaging apparatus 10.
  • FIG. 2 shows a schematic flow chart of a method 200 for contrast-enhanced ultrasound imaging according to an embodiment of the present application. As shown in FIG. 2, the contrast-enhanced ultrasound imaging method 200 includes the following steps:
  • In step S210, the ultrasound probe is controlled to transmit ultrasound to the target tissue containing the contrast agent, receive the echoes of the ultrasound, and obtain the first contrast data and the first tissue data in real time based on the echoes; both are volume data.
  • the volume data mentioned in this application is data obtained by scanning with an ultrasonic volume probe, and it may be three-dimensional data or four-dimensional data.
  • the ultrasonic volume probe can be a convex array probe or a surface array probe, which is not limited here.
  • the volumetric contrast data (also referred to as contrast volume data) and the volumetric tissue data (also referred to as tissue volume data) are said to be acquired "at the same time"; this does not necessarily mean they are acquired at exactly the same instant, but rather that both the volumetric contrast data and the volumetric tissue data can be obtained from the same ultrasound echoes.
  • FIG. 3 shows a schematic flow chart of acquiring volumetric contrast data and volumetric tissue data in a contrast-enhanced ultrasound imaging method according to an embodiment of the present application.
  • Using an ultrasound volume (or area array) transducer probe, volumetric contrast data and volumetric tissue data can be acquired simultaneously according to different transmission sequences, yielding two streams of volume data for the target tissue containing the contrast agent.
  • A contrast imaging sequence may be used as the transmission sequence. The contrast imaging transmission sequence may include two or more transmitted pulses of different amplitudes and phases. Contrast imaging transmission sequences often use lower transmission voltages when exciting the transducer, to prevent destruction of the contrast agent microbubbles and to achieve real-time contrast-enhanced ultrasound imaging.
  • The transducer sequentially transmits ultrasound pulses to the target tissue containing the contrast agent and sequentially receives the reflected echoes, which are input into a receiving circuit (such as a beamformer) to generate a corresponding received echo sequence (for example, received echo 1, received echo 2, ..., received echo N shown in Figure 3, where N is a natural number).
  • From this echo sequence the tissue signal and the contrast signal can be detected and extracted separately, and the corresponding image data can be generated and stored, so that the volumetric contrast data and the volumetric tissue data are obtained at the same time.
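  • For illustration only, the following minimal sketch shows one common way such a multi-pulse contrast sequence can be processed. It assumes a two-pulse, phase-inverted pair per scan line (a pulse-inversion scheme), which is one example of "pulses of different amplitudes and phases" and is not necessarily the exact sequence or separation method used by this application.

    import numpy as np

    def separate_contrast_and_tissue(echo_pos, echo_neg):
        """Separate contrast and tissue components from a phase-inverted pulse pair.

        echo_pos, echo_neg: 1-D arrays of RF samples received after transmitting
        a pulse and its phase-inverted copy along the same scan line.
        Returns (contrast_signal, tissue_signal) as crude envelopes.
        """
        # Summing the pair cancels the linear (tissue) response and keeps the
        # nonlinear response, which is dominated by the microbubbles.
        nonlinear = echo_pos + echo_neg
        # Differencing retains the linear tissue response.
        linear = 0.5 * (echo_pos - echo_neg)
        # Crude envelope detection via the absolute value of each signal.
        return np.abs(nonlinear), np.abs(linear)

    # Toy usage with synthetic echoes
    rng = np.random.default_rng(0)
    echo_pos = rng.standard_normal(1024)
    echo_neg = -echo_pos + 0.1 * rng.standard_normal(1024)  # imperfect inversion
    contrast_signal, tissue_signal = separate_contrast_and_tissue(echo_pos, echo_neg)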
  • the volumetric contrast data obtained in step S210 is referred to as the first contrast data only to distinguish it from the second contrast data described below, with no other limiting meaning; the relationship between the two will be described below.
  • likewise, the volumetric tissue data obtained in step S210 is referred to as the first tissue data only to distinguish it from the second tissue data described below, with no other limiting meaning; the relationship between the two will be described below.
  • on this basis, fusion rendering of the volumetric contrast data and the volumetric tissue data can be realized, as described in the following steps.
  • In step S220, the second contrast data and the second tissue data are rendered in real time to obtain a mixed rendering image of the second contrast data and the second tissue data, where the second contrast data includes all or part of the first contrast data and the second tissue data includes all or part of the first tissue data.
  • Fusion rendering can be performed based on all of the data of each of them (that is, the first contrast data and the first tissue data are rendered together in real time to obtain a mixed rendering image of the first contrast data and the first tissue data, which is displayed in step S230 described below); it can also be performed based on part of the data of each of them, or based on part of the data of one and all of the data of the other, to obtain the mixed rendering image.
  • the partial data of either the first contrast data or the first tissue data may include data corresponding to a region of interest.
  • Therefore, the data rendered in real time in step S220 is referred to as the second contrast data and the second tissue data, where the second contrast data includes all or part of the first contrast data and the second tissue data includes all or part of the first tissue data.
  • the aforementioned partial data may include data corresponding to the region of interest.
  • the second contrast data may include data of the region of interest of the first contrast data; on this basis, data corresponding to the region of interest can be extracted from the first contrast data as the second contrast data.
  • the second tissue data may include data of the region of interest of the first tissue data; on this basis, data corresponding to the region of interest can be extracted from the first tissue data as the second tissue data.
  • the methods for acquiring the data of the respective regions of interest may include, but are not limited to, any one or any combination of the following:
  • (1) Construct a solid model and set the region of interest by adjusting the size (and position) of the solid model, thereby obtaining the tissue within the region of interest and then the tissue data or contrast data within it. The solid model may be a model of any shape, for example a rectangular parallelepiped, an ellipsoid, a paraboloid, or any model with a smooth outer surface, and may be a combination of one or more types of models. A minimal sketch of this solid-model approach is given after this list of methods.
  • Another approach is to use feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), or Harr features, or a deep neural network, to extract features, and then match the extracted features against a database and classify them with a discriminator such as K-Nearest Neighbor (KNN) or Support Vector Machine (SVM) to identify the region of interest and obtain the tissue data or contrast data within it.
  • The region of interest can also be detected and recognized with a deep-learning Bounding-Box method, after which the tissue data or contrast data within it is obtained. For example, a network constructed by stacking convolutional layers and fully connected layers performs feature learning and parameter regression on a database; for an input image, the network directly regresses the bounding box of the corresponding region of interest and simultaneously obtains the category of the anatomical structure within it. Networks such as Region Convolutional Neural Networks (R-CNN), Fast R-CNN, Faster R-CNN, Single Shot MultiBox Detector (SSD), and You Only Look Once (YOLO) can be used to automatically obtain the region of interest in this way.
  • A further option is a deep-learning end-to-end semantic segmentation network that detects and recognizes the region of interest, after which the tissue data or contrast data within it is obtained. This type of method is similar in structure to the deep-learning bounding-box approach above, but removes the fully connected layers and adds upsampling or deconvolution layers so that the input and output have the same size, thereby directly obtaining the region of interest of the input image and its corresponding category. Examples include Fully Convolutional Networks (FCN), U-Net, and Mask R-CNN.
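  • The sketch below illustrates the solid-model approach of item (1) under simple assumptions: the model is a single axis-aligned ellipsoid, the region of interest is selected by zeroing out voxels outside it, and the volume shapes and names are purely illustrative rather than the application's actual data layout.

    import numpy as np

    def extract_roi_ellipsoid(volume, center, radii):
        """Return a copy of `volume` with voxels outside an ellipsoidal ROI zeroed.

        volume: 3-D array of contrast or tissue volume data (z, y, x)
        center: ellipsoid center in voxel coordinates (cz, cy, cx)
        radii:  ellipsoid semi-axes in voxels (rz, ry, rx)
        """
        z, y, x = np.indices(volume.shape)
        cz, cy, cx = center
        rz, ry, rx = radii
        inside = ((z - cz) / rz) ** 2 + ((y - cy) / ry) ** 2 + ((x - cx) / rx) ** 2 <= 1.0
        return np.where(inside, volume, 0)

    # Example: keep only an ellipsoidal region of a synthetic contrast volume
    first_contrast_data = np.random.rand(64, 64, 64).astype(np.float32)
    second_contrast_data = extract_roi_ellipsoid(first_contrast_data,
                                                 center=(32, 32, 32),
                                                 radii=(20, 16, 16))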
  • the second contrast data and the second tissue data may be fused and rendered to obtain a hybrid rendering image.
  • rendering the second contrast data and the second tissue data in real time to obtain a mixed rendering image of the two may further include: rendering the second contrast data and the second tissue data each in real time and fusing the resulting rendering results to obtain the mixed rendering image; or rendering the second contrast data and the second tissue data simultaneously in real time to obtain the mixed rendering image.
  • the fusion rendering of the volumetric contrast data and the volumetric tissue data may include rendering the two separately and then fusing and displaying them, or may include rendering the two together and then displaying them.
  • the two fusion rendering methods are described below with reference to FIG. 4 and FIG. 5 respectively.
  • Fig. 4 shows a schematic flowchart of an example of fusion rendering of volumetric contrast data and volumetric tissue data in the ultrasound contrast imaging method according to an embodiment of the present application.
  • As shown in Fig. 4, the volumetric contrast data (that is, the second contrast data above) and the volumetric tissue data (that is, the second tissue data above) are each rendered in real time, a weight map is computed as the basis for fusing the two rendering results, and the two rendering results are then fused according to the weight map to obtain a mixed rendering image that is displayed to the user.
  • Specifically, rendering the second contrast data and the second tissue data each in real time and fusing the resulting rendering results to obtain the mixed rendering image may further include: rendering the second contrast data in real time to obtain a first stereoscopic rendering image (which may be a two-dimensional image with a three-dimensional display effect) and obtaining the color value and spatial depth value of each pixel in the first stereoscopic rendering image; rendering the second tissue data in real time to obtain a second stereoscopic rendering image (likewise a two-dimensional image with a three-dimensional display effect) and obtaining the color value and spatial depth value of each pixel in the second stereoscopic rendering image; determining, based on the spatial depth value of each pixel in the first stereoscopic rendering image and the spatial depth value of the pixel at the corresponding position in the second stereoscopic rendering image, the respective weights of the two pixels when their color values are fused; and fusing the color values of corresponding pixels according to these weights to obtain the mixed rendering image.
  • the rendering mode for real-time rendering of the second contrast data may be surface rendering or volume rendering.
  • similarly, the rendering mode for real-time rendering of the second tissue data may be surface rendering or volume rendering.
  • The main methods of surface rendering include two types: reconstruction from slice contours (e.g. Delaunay triangulation) and extraction of isosurfaces from voxels (e.g. Marching Cubes).
  • Taking Marching Cubes as an example, the isosurface (i.e. surface contour) information of the tissues/organs in the volume data, namely the normal vectors and vertex coordinates of the triangular facets, is extracted to build a triangular mesh model, which is then combined with a lighting model for three-dimensional rendering.
  • the lighting model includes ambient light, scattered light, highlights, etc., and different light source parameters (type, direction, position, angle) affect the result of the lighting model to varying degrees; in this way a volume rendering (Volume Render, VR) image can be obtained.
  • Volume rendering is mainly based on a ray casting (ray tracing) algorithm and can include the following modes: a surface imaging mode (Surface) that displays the surface information of the object; a maximum echo mode (Max) that displays the maximum value of the object's internal information; a minimum echo mode (Min); an X-ray mode (X-Ray) that displays the internal structure information of the object; a light-and-shadow imaging mode (Volume Rendering with Global Illumination) that displays the surface information of the object based on a global illumination model; a contour mode (Silhouette) that displays the internal and external contour information of the object through a translucent effect; and a temporal pseudo-color imaging mode that highlights contrast data or tissue data newly appearing on the surface of the object at different times (the newly added contrast data or tissue data is given different pseudo-colors over time).
  • the appropriate volume rendering mode can be selected according to specific needs and/or user settings.
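  • As a simple illustration of how some of these modes differ, the sketch below (not from the original disclosure) computes maximum, minimum, and X-ray-like projections of a volume along one axis; a real implementation casts rays through the volume at arbitrary viewing orientations, and the function and parameter names here are purely illustrative.

    import numpy as np

    def project(volume, mode="max", axis=0):
        """Toy orthographic projections of a 3-D volume along one axis.

        mode: "max"  -> maximum echo mode (Max)
              "min"  -> minimum echo mode (Min)
              "xray" -> X-ray-like mode (mean of samples along each ray)
        """
        if mode == "max":
            return volume.max(axis=axis)
        if mode == "min":
            return volume.min(axis=axis)
        if mode == "xray":
            return volume.mean(axis=axis)
        raise ValueError(f"unknown mode: {mode}")

    volume = np.random.rand(32, 64, 64)   # synthetic volume data
    mip = project(volume, "max")          # 64x64 maximum-intensity image
    xray = project(volume, "xray")        # 64x64 X-ray-like image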
  • In one implementation, multiple rays passing through the contrast (or tissue) volume data are cast along the viewing direction, each ray advancing in fixed steps; the contrast (or tissue) volume data is sampled along the ray path, the opacity of each sampling point is determined from its gray value, the opacities of the sampling points on each ray path are accumulated to obtain a cumulative opacity, and the cumulative opacity on each ray path is mapped to a color value through an opacity-to-color mapping table. Each color value is then mapped to a pixel of the two-dimensional image; obtaining the color values of the pixels corresponding to all ray paths in this way yields the VR rendering image.
  • In another implementation, the rays are cast and sampled in the same way, the opacity of each sampling point is determined from its gray value and mapped to a color value through the opacity-to-color mapping table, the color values of the sampling points on each ray path are accumulated to obtain a cumulative color value, and the cumulative color value is mapped to a pixel of the two-dimensional image; obtaining the color values of the pixels corresponding to all ray paths in this way yields the VR rendering image.
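  • The following sketch illustrates the accumulation described above for a single ray. It is a simplified illustration (a linear gray-to-opacity mapping, a toy color map, front-to-back compositing with early termination) and not the exact implementation of this application.

    import numpy as np

    def cast_ray(samples, colormap, step_opacity_scale=0.05):
        """Accumulate opacity and color along one ray through volume data.

        samples: 1-D array of gray values sampled along the ray (front to back)
        colormap: function mapping a scalar gray value to an RGB triple
        Returns (accumulated_rgb, accumulated_opacity).
        """
        acc_color = np.zeros(3)
        acc_opacity = 0.0
        for gray in samples:
            # toy mapping: opacity proportional to the gray value at this sample
            alpha = np.clip(gray * step_opacity_scale, 0.0, 1.0)
            rgb = np.asarray(colormap(gray))
            # front-to-back compositing: later samples are attenuated by the
            # opacity already accumulated in front of them
            acc_color += (1.0 - acc_opacity) * alpha * rgb
            acc_opacity += (1.0 - acc_opacity) * alpha
            if acc_opacity >= 0.99:      # early ray termination
                break
        return acc_color, acc_opacity

    # Toy usage: a grayscale-to-warm-color map and one ray of samples
    warm = lambda g: (g, 0.8 * g, 0.5 * g)
    pixel_rgb, pixel_alpha = cast_ray(np.random.rand(200), warm)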
  • the above exemplarily shows the manner of real-time rendering of the second contrast data and the second tissue data.
  • the rendered image obtained by real-time rendering of the second contrast data is referred to as the first stereoscopic rendering image, and the rendered image obtained by real-time rendering of the second tissue data is referred to as the second stereoscopic rendering image.
  • When fusing the two rendering results, either the first weight map can be determined first and the second weight map derived from it, or the second weight map can be determined first and the first weight map derived from it.
  • the first weight map may be a map of the same size as the first stereoscopic rendering image, and the value of each point in it (generally between 0 and 1) indicates the weight given to the corresponding pixel of the first stereoscopic rendering image when it is fused with the second stereoscopic rendering image. The weight interval [0, 1] is used only as an exemplary description; this application does not limit the value range of the weights. Therefore, if the first weight map is expressed as Map, the second weight map can be expressed as 1-Map; similarly, if the first weight map is expressed as weight, the second weight map can be expressed as 1-weight. Because the principles of surface rendering and volume rendering differ, the weight maps used in the fusion display are slightly different. The following takes determining the first weight map first as an example. Since the first weight map gives the weight to be used for each pixel of the first stereoscopic rendering image during fusion display, the two cases in which the first stereoscopic rendering image is obtained by surface rendering and by volume rendering are described separately.
  • In the first case, the spatial depth value of each pixel in the first stereoscopic rendering image and the second stereoscopic rendering image can be obtained and used to calculate the first weight map (for surface rendering, the spatial depth information can be obtained from the vertex coordinates of the triangular facets; for volume rendering, from the position at which the ray first samples the tissue/organ and the position at which the ray terminates).
  • Here the first weight map may be called the first spatial position weight map and the second weight map the second spatial position weight map; if the first spatial position weight map is represented as Map, the second spatial position weight map can be represented as 1-Map. The following describes how the first spatial position weight map Map is determined and how the first and second stereoscopic rendering images are fused and displayed based on it.
  • the spatial position relationship between the data corresponding to each pixel in the first stereoscopic rendering image and the data corresponding to the pixel at the same position in the second stereoscopic rendering image can be determined from the spatial depth values of the pixels in the two images.
  • For example, the spatial depth values of the pixels in the first stereoscopic rendering image can be used as the reference standard to determine an effective spatial depth interval for comparison with the spatial depth values of the pixels in the second stereoscopic rendering image, and the spatial position relationship between the data corresponding to each pair of pixels can be determined from the comparison result; alternatively, the spatial depth values of the pixels in the second stereoscopic rendering image can be used as the reference standard to determine an effective spatial depth interval for comparison with the spatial depth values of the pixels in the first stereoscopic rendering image, and the spatial position relationship determined from that comparison.
  • the spatial depth value of each pixel in the first and second stereoscopic rendering images may cover one or more depth ranges; that is, the spatial depth value of each pixel includes a minimum value and a maximum value (which may be the minimum and maximum of the effective depth range of that pixel, for example the effective depth range obtained by filtering with a gray-level threshold during volume rendering), so the minimum and maximum spatial depth values of each pixel in the two images can be obtained and compared pixel by pixel.
  • The following takes the spatial depth values of the pixels in the second stereoscopic rendering image as the reference standard as an example. For a pixel at any given position, let the minimum and maximum spatial depth values of that pixel in the second stereoscopic rendering image be Y1 and Y2, and the minimum and maximum spatial depth values of the pixel at the same position in the first stereoscopic rendering image be X1 and X2.
  • If X1 is less than or equal to Y1, the contrast volume data at that position lies in front of the tissue volume data from the user's perspective, and the value at that position in the first spatial position weight map Map can be set to 1, that is, only the contrast signal is displayed at that position; if X2 is greater than or equal to Y2, the contrast volume data at that position lies behind the tissue volume data from the user's perspective, and the value at that position in Map can be set to 0, that is, only the tissue signal is displayed at that position; if X1 and X2 are both greater than Y1 and less than Y2, that is, the contrast volume data lies within the depth extent of the tissue volume data at that position, the value at that position in Map can be set to a value between 0 and 1, so that the contrast signal and the tissue signal are displayed in a certain ratio at that position. The specific ratio can be set according to user needs or other preset requirements.
  • In this way the weight at every pixel position of the first and second stereoscopic rendering images can be set, yielding the first spatial position weight map Map; an illustrative sketch is given below.
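  • A minimal sketch of building the first spatial position weight map from per-pixel depth extents is shown below. It assumes smaller depth values are closer to the viewer and uses a single user-adjustable ratio for the overlapping case; where both extreme conditions hold at a pixel, the later assignment wins, which is a simplification of this sketch rather than a rule from the application.

    import numpy as np

    def spatial_position_weight_map(x1, x2, y1, y2, overlap_weight=0.5):
        """Build the first spatial position weight map (Map) per pixel.

        x1, x2: min/max spatial depth of each pixel in the contrast rendering
        y1, y2: min/max spatial depth of each pixel in the tissue rendering
        All inputs are 2-D arrays of the same shape; smaller depth = closer
        to the viewer. `overlap_weight` is the ratio used where the contrast
        and tissue data overlap in depth.
        """
        weight_map = np.full(x1.shape, overlap_weight, dtype=np.float32)
        weight_map[x1 <= y1] = 1.0   # contrast in front of tissue: show contrast
        weight_map[x2 >= y2] = 0.0   # contrast behind tissue: show tissue
        return weight_map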
  • The above uses the spatial depth values of the second stereoscopic rendering image as the reference standard for exemplary description; the spatial depth values of the first stereoscopic rendering image could equally serve as the reference standard, which is not limited in this application.
  • Likewise, weight values that sum to 1 are used here only as an example; this application does not limit the value range of the weights.
  • Based on the first spatial position weight map, the first stereoscopic rendering image and the second stereoscopic rendering image can be fused and displayed; the color value of each pixel of the third stereoscopic rendering image (that is, the mixed rendering image) is computed as:
  • Color_Total = Color_C × Map + Color_B × (1 − Map)
  • where Color_Total is the fused color value, Color_C is the color value of the pixel in the first stereoscopic rendering image (the contrast image), Color_B is the color value of the pixel in the second stereoscopic rendering image (the tissue image), and Map is the first spatial position weight map.
  • When the first stereoscopic rendering image is obtained by volume rendering, in addition to the spatial depth value of each pixel in the first and second stereoscopic rendering images, the cumulative opacity value of each pixel in the first stereoscopic rendering image can also be obtained and used to calculate the first weight map. Since the first weight map in this case is calculated both from the spatial depth values of the pixels in the two images and from the cumulative opacity values of the pixels in the first stereoscopic rendering image, the first weight map is here expressed as weight and the second weight map as 1-weight, where the value of each point in weight equals the value of the corresponding point in the first spatial position weight map Map multiplied by the cumulative opacity value of the pixel at that position in the first stereoscopic rendering image, that is, weight = Map × Opacity_C.
  • Based on this weight map, the first stereoscopic rendering image and the second stereoscopic rendering image can be fused and displayed; the color value of each pixel is computed as:
  • Color_Total = Color_C × weight + Color_B × (1 − weight)
  • where Color_Total is the fused color value, Color_C is the color value of the pixel in the first stereoscopic rendering image (the contrast image), Color_B is the color value of the pixel in the second stereoscopic rendering image (the tissue image), weight is the first weight map, Map is the first spatial position weight map, and Opacity_C is the cumulative opacity value of the pixel in the first stereoscopic rendering image.
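  • The sketch below applies the two fusion formulas above per pixel; the array shapes and function name are illustrative assumptions, and the depth-only case simply uses weight = Map while the volume-rendering case uses weight = Map × Opacity_C.

    import numpy as np

    def fuse_renderings(color_c, color_b, position_map, opacity_c=None):
        """Fuse contrast and tissue rendering results per pixel.

        color_c:      HxWx3 color image rendered from the contrast volume data
        color_b:      HxWx3 color image rendered from the tissue volume data
        position_map: HxW first spatial position weight map (Map), values in [0, 1]
        opacity_c:    optional HxW cumulative opacity of the contrast rendering;
                      when given, weight = Map * Opacity_C, otherwise weight = Map.
        """
        weight = position_map if opacity_c is None else position_map * opacity_c
        weight = weight[..., np.newaxis]        # broadcast over the RGB channels
        return color_c * weight + color_b * (1.0 - weight)

    # Example with random data of matching shapes
    h, w = 240, 320
    color_c = np.random.rand(h, w, 3)
    color_b = np.random.rand(h, w, 3)
    Map = np.random.rand(h, w)
    opacity_c = np.random.rand(h, w)
    mixed_rendering = fuse_renderings(color_c, color_b, Map, opacity_c)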
  • FIG. 5 shows a schematic flowchart of another example of fusion rendering of volumetric contrast data and volumetric tissue data in the ultrasound contrast imaging method according to an embodiment of the present application.
  • As shown in Fig. 5, the volumetric contrast data (that is, the second contrast data above) and the volumetric tissue data (that is, the second tissue data above) are rendered simultaneously, as described below.
  • Specifically, rendering the second contrast data and the second tissue data simultaneously in real time to obtain the mixed rendering image may further include: performing volume rendering on the second contrast data and the second tissue data at the same time, and obtaining the spatial depth value and gray value of each sampling point on each ray path during the volume rendering; obtaining the color value of each sampling point based on the spatial depth value and gray value of each sampling point on each ray path, and determining the cumulative color value on each ray path based on the color values of all sampling points on that path; and determining the color value of each pixel in the third stereoscopic rendering image based on the cumulative color value on each ray path, mapping the cumulative color values to the third stereoscopic rendering image to obtain the mixed rendering image.
  • Obtaining the color value of each sampling point based on its spatial depth value and gray value may include: looking up the color value in a preset three-dimensional color index table according to the spatial depth value and gray value of each sampling point on each ray path, where the three dimensions of the index table are the contrast gray value, the tissue gray value, and the spatial depth value, and each combination of the three corresponds to a color value; or computing the color value with a predetermined mapping function whose three variables are the contrast gray value, the tissue gray value, and the spatial depth value, and whose result is the color value.
  • That is, a ray casting algorithm is used to cast, along the viewing direction, multiple rays passing through both the contrast volume data and the tissue volume data. The contrast volume data and/or the tissue volume data are sampled along each ray path to obtain the contrast gray value and/or the tissue gray value at each sampling point; combined with the current ray-step depth information, the color value is obtained by indexing the three-dimensional color table or by the preset mapping function. The color values of the sampling points on each ray path are then accumulated, and the accumulated color value is mapped to a pixel of the two-dimensional image; obtaining the color values of the pixels corresponding to all ray paths in this way yields the VR rendering image, i.e. the final mixed rendering image.
  • Rendering the second contrast data and the second tissue data at the same time to obtain the mixed rendering image can be expressed by the formulas:
  • Color_ray = 3DColorTexture(value_C, value_B, depth)
  • Color_Total = Σ (from start to end) Color_ray
  • where Color_ray is the color value of the current sampling point, value_C is the contrast gray value of the current sampling point, value_B is the tissue gray value of the current sampling point, depth is the ray depth information of the current sampling point, 3DColorTexture() is the three-dimensional color index table or predetermined mapping function, Color_Total is the accumulated color value of the sampling points on the current ray path, start denotes the first sampling point on the current ray path, and end denotes the last sampling point on the current ray path.
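  • The sketch below accumulates one ray's color using a three-dimensional color lookup in the role of 3DColorTexture(); the table shape, the normalization of the inputs to [0, 1], and the nearest-neighbor indexing are assumptions of this illustration, not the application's actual table or interpolation scheme.

    import numpy as np

    def render_mixed_pixel(contrast_samples, tissue_samples, depths, color_table):
        """Accumulate one ray's color for simultaneous contrast/tissue rendering.

        contrast_samples, tissue_samples, depths: 1-D arrays of equal length
            holding the contrast gray value, tissue gray value, and ray depth of
            each sampling point (all normalized to [0, 1] here for simplicity).
        color_table: 3-D lookup table of shape (NC, NB, ND, 3) standing in for
            3DColorTexture(value_C, value_B, depth).
        """
        nc, nb, nd, _ = color_table.shape
        color_total = np.zeros(3)
        for value_c, value_b, depth in zip(contrast_samples, tissue_samples, depths):
            # Index the 3-D color table with the three variables of the sample
            i = min(int(value_c * (nc - 1)), nc - 1)
            j = min(int(value_b * (nb - 1)), nb - 1)
            k = min(int(depth * (nd - 1)), nd - 1)
            color_ray = color_table[i, j, k]
            color_total += color_ray          # Color_Total = sum of Color_ray
        return color_total

    # Toy usage with a random 3-D color index table and one ray of 128 samples
    table = np.random.rand(32, 32, 16, 3)
    ray_rgb = render_mixed_pixel(np.random.rand(128), np.random.rand(128),
                                 np.linspace(0.0, 1.0, 128), table)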
  • step S230 the hybrid rendering image is displayed in real time.
  • the hybrid rendering image includes at least a part of the rendering image obtained by real-time rendering of the second contrast data and at least a part of the rendering image obtained by real-time rendering of the second tissue data.
  • this application can realize real-time mixed imaging of ultrasound volumetric contrast and volumetric tissue, that is, the tissue and contrast volume data are collected in real time and the mixed image of tissue and contrast is displayed after real-time rendering.
  • its imaging frame rate is above 0.8 VPS (volumes per second).
  • this application can greatly reduce the time consumed by the imaging process.
  • the above-mentioned second contrast data and second tissue data are both volume data (that is, three-dimensional or four-dimensional data); therefore, based on steps S210 to S220, one frame of mixed rendering image or multiple frames of mixed rendering images can be obtained.
  • when multiple frames of mixed rendering images are obtained, they may be displayed dynamically as a multi-frame sequence, for example in time order.
  • the part representing the contrast data or the part representing the tissue data may be displayed with different image characteristics (for example, different colors).
  • the part representing the contrast data in the hybrid rendering image is displayed in yellow, and the part representing the tissue data in the hybrid rendering image is displayed in gray.
  • the user can observe the real-time change process of the spatial position relationship between the contrast agent and the tissue.
  • the above-mentioned target tissue may include the fallopian tube region, and further, feature extraction may be performed on the mixed rendering image, and the analysis result of the fallopian tube region of the target object may be output based on the result of the feature extraction.
  • the analysis result of the fallopian tube presented in the hybrid rendering image may be obtained based on the features extracted from the hybrid rendering image, so as to provide a basis for the diagnosis of the fallopian tube of the target object.
  • feature extraction may be performed on each frame of the mixed rendering image and the analysis result of the fallopian tube region corresponding to that frame output; alternatively, the feature extraction results of multiple frames of mixed rendering images may be combined to output the analysis result corresponding to one of those frames (for example, the feature extraction results of N frames are combined to output only the analysis result of the fallopian tube region corresponding to the last frame, i.e. the Nth frame, where N is a natural number greater than 1).
  • feature extraction on each frame of the mixed rendering image can be performed with image processing algorithms such as Principal Components Analysis (PCA), Linear Discriminant Analysis (LDA), Harr features, texture features, and the like; alternatively, feature extraction can be performed with deep neural networks such as AlexNet, VGG, ResNet, MobileNet, DenseNet, EfficientNet, EfficientDet, and the like.
  • outputting the analysis result of the fallopian tube region based on the feature extraction result may include: matching the feature extraction result with features stored in a database, classifying with a discriminator, and outputting the classification result as the analysis result of the fallopian tube region (a minimal illustrative sketch is given below).
  • the discriminator may include, but is not limited to, K-Nearest Neighbor (K-Nearest Neighbor, KNN for short), Support Vector Machines (SVM for short), random forest, neural network, and the like.
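  • The sketch below illustrates this feature-matching-and-classification idea under simple assumptions: feature extraction is reduced to a gray-level histogram, the discriminator is an SVM from scikit-learn, and the "database", labels, and attribute names are hypothetical stand-ins rather than this application's actual pipeline.

    import numpy as np
    from sklearn.svm import SVC

    def histogram_features(image, bins=32):
        """Toy feature extraction: a normalized gray-level histogram of one
        mixed rendering image (stand-in for PCA/LDA/Harr/deep features)."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
        return hist

    # Hypothetical database of labeled mixed rendering images:
    # label 0 = normal fallopian tube region, 1 = blocked
    rng = np.random.default_rng(0)
    database_images = rng.random((40, 128, 128))
    database_labels = rng.integers(0, 2, size=40)

    X = np.stack([histogram_features(img) for img in database_images])
    clf = SVC(probability=True).fit(X, database_labels)

    # Classify a new mixed rendering image and report the attribute probability
    new_image = rng.random((128, 128))
    prob_blocked = clf.predict_proba([histogram_features(new_image)])[0][1]
    print(f"blocked: {prob_blocked:.0%}")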
  • the analysis result of the fallopian tube area may include at least one relevant attribute of the fallopian tube of the target object.
  • the relevant attributes may include the patency attribute, the shape attribute, whether the fimbrial end has hydrosalpinx (fluid accumulation), and whether there is a cyst.
  • the patency attribute can include: normal, poorly patent, blocked, absent, etc.; the shape attribute can include: twisted, too long, too short, and so on.
  • the analysis result of the fallopian tube region may also include the determined probability value of the related attribute, such as the probability value of the fallopian tube being blocked, the probability value of the fallopian tube being twisted, and so on.
  • the numerical range of the probability value of each related attribute may be 0 to 100%.
  • In other words, the corresponding analysis result can be output by extracting features from one or several frames of the mixed rendering image and classifying them, namely at least one of the above-mentioned related attributes of the fallopian tube of the target object together with the probability value of each related attribute.
  • the analysis result of the fallopian tube region may also be a scoring result for the fallopian tube of the target object, and the scoring result may be determined based on the output related attributes and the probability value of each related attribute.
  • for example, if the patency attribute of the fallopian tube is determined to be normal with a probability of 100% through feature extraction and discriminator classification, the scoring result may be "normal: 100"; if the patency attribute is determined to be blocked with a probability of 100%, the scoring result may be "blocked: 100".
  • a comprehensive score can also be determined from the respective probability values of multiple related attributes.
  • the corresponding fallopian tube analysis result can be marked on at least one frame of the mixed rendering image, and the marked mixed rendering image displayed to the user; for example, a mixed rendering image of a normal fallopian tube is displayed with the marked scoring result "normal: 100", and a mixed rendering image of a blocked fallopian tube is displayed with the marked scoring result "blocked: 100".
  • when the mixed rendering image marked with the fallopian tube analysis result is displayed to the user (such as a doctor), both the contrast region and the tissue region can be seen, so that the user can intuitively understand and observe the spatial position relationship and flow of the contrast agent in the tissue; at the same time, the annotated result allows the user to intuitively understand the automatic analysis of the fallopian tube of the target object, providing a reference for the doctor's diagnosis and helping to further improve diagnostic efficiency.
  • the mixed rendering image and the fallopian tube analysis result can also be displayed separately.
  • pseudo-color display can also be performed on the basis of the above-mentioned multi-frame dynamic display.
  • For example, the newly displayable contrast data located in front of the tissue data in the current frame of the mixed rendering image, relative to the previous frame, can be displayed in a different color from before, to show where the contrast agent has most recently appeared within the tissue data.
  • for example, if the part of the mixed rendering image that represents the contrast data is displayed in yellow, the newly displayable contrast data may be displayed in a color different from yellow, such as blue.
  • in this way the user can not only observe the real-time change of the spatial position relationship between the contrast agent and the tissue, but also observe the flow of the contrast agent within the tissue. A sketch of such temporal pseudo-color highlighting is given below.
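  • The sketch below colors the displayable contrast pixels of the current frame yellow and the pixels where contrast newly appeared (present in the current frame's displayable contrast mask but not the previous frame's) blue; the masks, colors, and array shapes are illustrative assumptions consistent with the example colors above.

    import numpy as np

    def temporal_pseudo_color(curr_contrast_mask, prev_contrast_mask):
        """Color the displayable contrast pixels of the current frame.

        curr_contrast_mask, prev_contrast_mask: HxW boolean masks of pixels
            where contrast data is displayable (e.g. in front of the tissue
            data) in the current and previous mixed rendering frames.
        Returns an HxWx3 RGB overlay: yellow for existing contrast, blue for
        contrast that newly appeared in the current frame.
        """
        overlay = np.zeros(curr_contrast_mask.shape + (3,), dtype=np.float32)
        newly_added = curr_contrast_mask & ~prev_contrast_mask
        existing = curr_contrast_mask & prev_contrast_mask
        overlay[existing] = (1.0, 1.0, 0.0)     # yellow: previously visible contrast
        overlay[newly_added] = (0.0, 0.0, 1.0)  # blue: newly appeared contrast
        return overlay

    # Toy usage on two consecutive frames
    prev_mask = np.zeros((240, 320), dtype=bool)
    curr_mask = np.zeros((240, 320), dtype=bool)
    prev_mask[100:140, 100:140] = True
    curr_mask[100:160, 100:160] = True          # contrast has flowed further
    frame_overlay = temporal_pseudo_color(curr_mask, prev_mask)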
  • In some embodiments, a user instruction may be received and the display of the current frame of the mixed rendering image adjusted according to it. For example, if the user wishes to display only tissue data, only contrast data, or tissue data and contrast data with a desired transparency in the current frame, the weights in the aforementioned weight map used for fusion display of the current frame can be adjusted according to the user instruction to obtain the desired display effect. This embodiment allows the current frame of the mixed rendering image to be adjusted by the user, thereby realizing more flexible mixed imaging of volumetric contrast and tissue.
  • FIG. 6 shows an exemplary schematic diagram of a hybrid rendering image obtained by the ultrasound contrast imaging method according to an embodiment of the present application.
  • In the hybrid rendering image, both the contrast region and the tissue region can be seen, which helps users more intuitively understand and observe the real-time spatial position relationship of the contrast agent in the tissue and obtain more clinical information.
  • Based on the above description, the contrast-enhanced ultrasound imaging method simultaneously collects volumetric contrast data and volumetric tissue data and fuses and renders the two to obtain a hybrid rendering image, which helps users more intuitively understand and observe the real-time spatial position relationship of the contrast agent within the tissue and obtain more clinical information.
  • FIG. 7 shows a schematic block diagram of an ultrasound imaging apparatus 700 according to an embodiment of the present application.
  • the ultrasound imaging apparatus 700 may include a transmission/reception sequence controller 710, an ultrasound probe 720, a processor 730, and a display 740.
  • the transmission/reception sequence controller 710 is used to control the ultrasound probe 720 to transmit ultrasound to the target tissue containing the contrast agent, receive the echoes of the ultrasound, and obtain the first contrast data and the first tissue data in real time based on the echoes; both the first contrast data and the first tissue data are volume data.
  • the processor 730 is configured to render the second contrast data and the second tissue data in real time to obtain a mixed rendering image of the second contrast data and the second tissue data, where the second contrast data includes all or part of the first contrast data and the second tissue data includes all or part of the first tissue data; the display 740 is used to display the mixed rendering image in real time.
  • when the partial data includes data corresponding to the region of interest, the processor 730 may also be used to extract the data corresponding to the region of interest from the first contrast data as the second contrast data, and/or extract the data corresponding to the region of interest from the first tissue data as the second tissue data.
  • the processor 730 rendering the second contrast data and the second tissue data in real time to obtain the mixed rendering image may further include: rendering the second contrast data and the second tissue data each in real time and fusing the resulting rendering results to obtain the mixed rendering image; or rendering the second contrast data and the second tissue data simultaneously in real time to obtain the mixed rendering image.
  • the processor 730 rendering the second contrast data and the second tissue data each in real time and fusing the resulting rendering results may further include: rendering the second contrast data in real time to obtain a first stereoscopic rendering image and obtaining the color value and spatial depth value of each pixel in it; rendering the second tissue data in real time to obtain a second stereoscopic rendering image and obtaining the color value and spatial depth value of each pixel in it; determining, based on the spatial depth value of each pixel in the first stereoscopic rendering image and the spatial depth value of the pixel at the corresponding position in the second stereoscopic rendering image, the respective weights of the two pixels when their color values are fused; and fusing the color values of corresponding pixels according to these weights to obtain the mixed rendering image.
  • the rendering mode used by the processor 730 for real-time rendering of the second contrast data and the second tissue data may both be surface rendering.
  • alternatively, the rendering mode used by the processor 730 for real-time rendering of the second contrast data and/or the second tissue data may be volume rendering; in that case, the processor 730 may determine the respective weights of each pixel in the first stereoscopic rendering image and the pixel at the corresponding position in the second stereoscopic rendering image when their color values are fused also based on the cumulative opacity of each pixel in the first stereoscopic rendering image and/or the cumulative opacity of each pixel in the second stereoscopic rendering image.
  • The processor 730 performing real-time rendering on the second contrast data and the second tissue data simultaneously to obtain the hybrid rendering image may further include: performing volume rendering on the second contrast data and the second tissue data simultaneously, and obtaining the spatial depth value and gray value of each sampling point on each ray path during the volume rendering, where the gray value of each sampling point includes the gray value of the second contrast data at that point and/or the gray value of the second tissue data at that point; obtaining the color value of each sampling point based on the spatial depth value and gray value of each sampling point on each ray path, and determining the cumulative color value on each ray path based on the color values of all sampling points on that ray path; and determining the color value of each pixel in a third stereo rendering image based on the cumulative color value on each ray path, and mapping the cumulative color values into the third stereo rendering image to obtain the hybrid rendering image.
  • The processor 730 obtaining the color value of each sampling point based on the spatial depth value and gray value of each sampling point on each ray path may include: obtaining the color value of each sampling point according to a preset three-dimensional color index table, based on the spatial depth value and gray value of each sampling point on each ray path, where the three variables of the three-dimensional color index table are the contrast gray value, the tissue gray value, and the spatial depth value, and each combination of the three variables corresponds to one color value; or obtaining the color value of each sampling point according to a predetermined mapping function, based on the spatial depth value and gray value of each sampling point on each ray path, where the predetermined mapping function takes three variables, namely the contrast gray value, the tissue gray value, and the spatial depth value, and its function result is a color value.
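As a minimal, non-limiting sketch of the lookup just described, the following Python code assumes the three-dimensional color index table is a precomputed array indexed by quantized contrast gray value, tissue gray value, and spatial depth; the bin counts, array layout, and function names are illustrative assumptions rather than the implementation required by this application.

```python
import numpy as np

def sample_color(lut, value_c, value_b, depth, gray_bins=256, depth_bins=64, max_depth=1.0):
    """Look up the color of one sampling point from a preset 3-D color index table.

    lut has shape (gray_bins, gray_bins, depth_bins, 3): axis 0 = contrast gray value,
    axis 1 = tissue gray value, axis 2 = quantized spatial depth, last axis = RGB."""
    ic = min(int(value_c), gray_bins - 1)                                # contrast gray value index
    ib = min(int(value_b), gray_bins - 1)                                # tissue gray value index
    iz = min(int(depth / max_depth * (depth_bins - 1)), depth_bins - 1)  # spatial depth index
    return lut[ic, ib, iz]

def accumulate_ray(lut, samples):
    """samples: (value_c, value_b, depth) tuples for the sampling points from 'start' to 'end' on one ray path."""
    total = np.zeros(3, dtype=np.float32)
    for value_c, value_b, depth in samples:
        total += sample_color(lut, value_c, value_b, depth)              # accumulate color values along the ray
    return total                                                         # mapped to one pixel of the hybrid rendering image
```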
  • The processor 730 may extract the data corresponding to the region of interest based on a deep learning approach.
  • The ultrasound probe 720 acquiring the first contrast data and the first tissue data based on the echo of the ultrasound may further include: acquiring a first contrast signal and a first tissue signal based on the echo of the ultrasound; and acquiring the first contrast data in real time based on the first contrast signal, and acquiring the first tissue data in real time based on the first tissue signal.
  • In general, the ultrasound imaging apparatus 700 can be used to execute the ultrasound contrast imaging method 200 according to the embodiments of the present application described above. Those skilled in the art can understand the structure and operation of the ultrasound imaging apparatus 700 in combination with the foregoing description; for the sake of brevity, some of the details described above will not be repeated here.
  • Based on the above description, the ultrasound imaging apparatus according to the embodiments of the present application simultaneously collects volumetric contrast data and volumetric tissue data and performs fusion rendering of the two to obtain a hybrid rendering image, which can help the user more intuitively understand and observe the real-time spatial position relationship of the contrast agent within the tissue and obtain more clinical information.
  • FIG. 8 shows a schematic block diagram of an ultrasound imaging apparatus 800 according to an embodiment of the present application.
  • The ultrasound imaging apparatus 800 includes a memory 810 and a processor 820.
  • The memory 810 stores a program for implementing the corresponding steps of the ultrasound contrast imaging method 200 according to an embodiment of the present application.
  • The processor 820 is configured to run the program stored in the memory 810 to execute the corresponding steps of the ultrasound contrast imaging method 200 according to the embodiments of the present application.
  • According to another aspect of the present application, an ultrasound contrast imaging method is provided, which includes: controlling an ultrasound probe to transmit ultrasound to a target tissue containing a contrast agent, receiving the echo of the ultrasound, and obtaining first contrast data and first tissue data in real time based on the echo of the ultrasound, where the first contrast data and the first tissue data are both volume data; performing real-time rendering on the first contrast data to obtain a first stereo rendering image, and performing real-time rendering on the first tissue data to obtain a second stereo rendering image; and displaying the first stereo rendering image and the second stereo rendering image at the same time.
  • In this embodiment, volumetric contrast data and volumetric tissue data are obtained from the echo of the ultrasound, and their respective stereo rendering images obtained by real-time rendering are displayed in the same interface at the same time, which can also help the user observe the real-time spatial position relationship of the contrast agent within the tissue and obtain more clinical information.
  • According to yet another aspect of the present application, an ultrasound imaging device is provided, which can be used to implement the above-mentioned ultrasound contrast imaging method.
  • The ultrasound imaging device may include an ultrasound probe, a transmission/reception sequence controller, a processor, and a display, wherein: the transmission/reception sequence controller is used to control the ultrasound probe to transmit ultrasound to the target tissue containing the contrast agent, receive the echo of the ultrasound, and acquire the first contrast data and the first tissue data in real time based on the echo of the ultrasound, where the first contrast data and the first tissue data are both volume data; the processor is used to perform real-time rendering on the first contrast data to obtain a first stereo rendering image and to perform real-time rendering on the first tissue data to obtain a second stereo rendering image; and the display is used to simultaneously display the first stereo rendering image and the second stereo rendering image in real time.
  • A storage medium is also provided, on which program instructions are stored; when the program instructions are executed by a computer or a processor, they are used to execute the corresponding steps of the ultrasound contrast imaging method of the embodiments of the present application.
  • The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • The computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • A computer program is also provided, and the computer program can be stored in the cloud or on a local storage medium. When the computer program is run by a computer or a processor, it is used to execute the corresponding steps of the ultrasound contrast imaging method of the embodiments of the present application.
  • The ultrasound contrast imaging method, ultrasound imaging device, and storage medium according to the embodiments of the present application simultaneously collect volumetric contrast data and volumetric tissue data and perform fusion rendering of the two to obtain a hybrid rendering image, which can help users more intuitively understand and observe the real-time spatial position relationship of the contrast agent in the tissue and obtain more clinical information.
  • In the several embodiments provided in this application, it should be understood that the disclosed device and method can be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components can be combined or integrated into another device, or some features can be ignored or not implemented.
  • The various component embodiments of the present application may be implemented by hardware, by software modules running on one or more processors, or by a combination thereof.
  • A microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present application.
  • The present application can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Hematology (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)

Abstract

一种超声造影成像方法、超声成像装置和存储介质,所述方法包括:控制超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据(S210);对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,其中,第二造影数据包括第一造影数据的全部或部分数据,第二组织数据包括第一组织数据的全部或部分数据(S220);实时显示混合渲染图像(S230)。所述超声造影成像方法和超声成像装置能够帮助用户更为直观地理解、观察造影剂在组织内实时的空间位置关系,以及获取更多的临床信息。

Description

超声造影成像方法、超声成像装置和存储介质
说明书
技术领域
本申请涉及超声成像技术领域,更具体地涉及一种超声造影成像方法、超声成像装置和存储介质。
背景技术
超声仪器一般用于医生观察人体的内部组织结构,医生将操作探头放在人体部位对应的皮肤表面,可以得到该部位的超声图像。超声由于其安全、方便、无损、廉价等特点,已经成为医生诊断的主要辅助手段。
超声造影剂是在超声成像中用来增强图像对比度的物质,一般为微米量级直径的包膜微气泡,微泡具有很强的声阻抗,通过静脉注射进入血液循环系统,以增强超声波的反射强度,从而达到超声造影成像的目的。与常规超声成像相比,可以显著提高病变组织在微循环灌注水平的检测,相比于其他检查方法如电子计算机断层扫描(Computed Tomography,简称为CT)、磁共振成像(Magnetic Resonance Imaging,简称为MRI)等,超声造影剂具备简便、耗时短、实时性、无创以及无辐射等优点,已成为超声诊断中一个十分重要的技术。
三维造影成像是将连续采集到的动态二维切面造影数据经过计算机的一系列处理,并按照一定顺序排列重新组成三维数据,再利用三维渲染技术(面绘制、体绘制等)还原出组织器官的立体结构信息,帮助医生做出更为详细的临床诊断。医学超声三维造影成像技术已被广泛应用到甲状腺(结节检测)、乳腺、肝脏(硬化、结节、肿瘤)、输卵管(堵塞)等领域的检查中。
目前大部分的超声三维造影成像只能单独显示三维造影图像或组织图像,但为了对相关病灶进行精准定位以及诊断,往往需要结合这两者的图像信息以及空间相对位置关系,为此用户往往需要在三维造影图像或组织图像这两者之间反复切换,这样不仅操作繁琐,且需要一定的空间想象 力才能确定二者的空间位置关系。
发明内容
本申请提供一种超声造影成像方案,其能够帮助用户更为直观地理解、观察造影剂在组织内的空间位置关系,以及获取更多的临床信息。下面简要描述本申请提出的超声造影成像方案,更多细节将在后续结合附图在具体实施方式中加以描述。
本申请一方面,提供了一种超声造影成像方法,该方法包括:控制超声探头向含有造影剂的目标组织发射超声波,接收超声波的回波,并基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据;对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,其中,第二造影数据包括第一造影数据的全部或部分数据,第二组织数据包括第一组织数据的全部或部分数据;实时显示混合渲染图像。
本申请另一方面,提供了一种超声成像装置,该装置包括超声探头、发射/接收序列控制器、处理器和显示器,其中:发射/接收序列控制器用于控制超声探头向含有造影剂的目标组织发射超声波,接收超声波的回波,并基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据;处理器用于对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,其中,第二造影数据包括第一造影数据的全部或部分数据,第二组织数据包括第一组织数据的全部或部分数据;显示器用于实时显示混合渲染图像。
本申请再一方面,提供了一种存储介质,该存储介质上存储有计算机程序,计算机程序在运行时执行上述超声造影成像方法。
根据本申请实施例的超声造影成像方法、超声成像装置和存储介质同时采集容积造影数据和容积组织数据,对二者进行融合渲染得到混合渲染图像,能够帮助用户更为直观地理解、观察造影剂在组织内实时的空间位置关系,以及获取更多的临床信息。
附图说明
图1示出用于实现根据本申请实施例的超声造影成像方法的示例性超声成像装置的示意性框图。
图2示出根据本申请实施例的超声造影成像方法的示意性流程图。
图3示出根据本申请实施例的超声造影成像方法中获取容积造影数据和容积组织数据的示意性流程框图。
图4示出根据本申请实施例的超声造影成像方法中对容积造影数据和容积组织数据进行融合渲染的一个示例的示意性流程框图。
图5示出根据本申请实施例的超声造影成像方法中对容积造影数据和容积组织数据进行融合渲染的另一个示例的示意性流程框图。
图6示出根据本申请实施例的超声造影成像方法得到的混合渲染图像的示例性示意图。
图7示出根据本申请一个实施例的超声成像装置的示意性框图。
图8示出根据本申请另一个实施例的超声成像装置的示意性框图。
具体实施方式
为了使得本申请的目的、技术方案和优点更为明显,下面将参照附图详细描述根据本申请的示例实施例。显然,所描述的实施例仅仅是本申请的一部分实施例,而不是本申请的全部实施例,应理解,本申请不受这里描述的示例实施例的限制。基于本申请中描述的本申请实施例,本领域技术人员在没有付出创造性劳动的情况下所得到的所有其他实施例都应落入本申请的保护范围之内。
在下文的描述中,给出了大量具体的细节以便提供对本申请更为彻底的理解。然而,对于本领域技术人员而言显而易见的是,本申请可以无需一个或多个这些细节而得以实施。在其他的例子中,为了避免与本申请发生混淆,对于本领域公知的一些技术特征未进行描述。
应当理解的是,本申请能够以不同形式实施,而不应当解释为局限于这里提出的实施例。相反地,提供这些实施例将使公开彻底和完全,并且将本申请的范围完全地传递给本领域技术人员。
在此使用的术语的目的仅在于描述具体实施例并且不作为本申请的限制。在此使用时,单数形式的“一”、“一个”和“所述/该”也意图包括 复数形式,除非上下文清楚指出另外的方式。还应明白术语“组成”和/或“包括”,当在该说明书中使用时,确定所述特征、整数、步骤、操作、元件和/或部件的存在,但不排除一个或更多其他的特征、整数、步骤、操作、元件、部件和/或组的存在或添加。在此使用时,术语“和/或”包括相关所列项目的任何及所有组合。
为了彻底理解本申请,将在下列的描述中提出详细的步骤以及详细的结构,以便阐释本申请提出的技术方案。本申请的较佳实施例详细描述如下,然而除了这些详细描述外,本申请还可以具有其他实施方式。
首先,参照图1来描述用于实现本申请实施例的超声造影成像方法的示例性超声成像装置。
图1为用于实现本申请实施例的超声造影成像方法的示例性超声成像装置10的结构框图示意图。如图1所示,该超声成像装置10可以包括超声探头100、发射/接收选择开关101、发射/接收序列控制器102、处理器103、显示器104和存储器105。发射/接收序列控制器102可以激励超声探头100向目标对象(被测对象)发射超声波,还可以控制超声探头100接收从目标对象返回的超声回波,从而获得超声回波信号/数据。处理器103对该超声回波信号/数据进行处理,以获得目标对象的组织相关参数和超声图像。处理器103获得的超声图像可以存储于存储器105中,这些超声图像可以在显示器104上显示。
本申请实施例中,前述的超声成像装置10的显示器104可为触摸显示屏、液晶显示屏等,也可以是独立于超声成像装置10之外的液晶显示器、电视机等独立显示装置,也可为手机、平板电脑等电子装置上的显示屏。
本申请实施例中,前述的超声成像装置10的存储器105可为闪存卡、固态存储器、硬盘等。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质存储有多条程序指令,该多条程序指令被处理器103调用执行后,可执行本申请各个实施例中的超声造影成像方法中的部分步骤或全部步骤或其中步骤的任意组合。
一个实施例中,该计算机可读存储介质可为存储器105,其可以是闪存卡、固态存储器、硬盘等非易失性存储介质。
本申请实施例中,前述的超声成像装置10的处理器103可以通过软件、硬件、固件或者其组合实现,可以使用电路、单个或多个专用集成电路(application specific integrated circuits,ASIC)、单个或多个通用集成电路、单个或多个微处理器、单个或多个可编程逻辑器件、或者前述电路或器件的组合、或者其他适合的电路或器件,从而使得该处理器103可以执行各个实施例中的超声造影成像方法的相应步骤。
下面结合图2到图6对本申请的超声造影成像方法进行详细描述,该方法可由前述的超声成像装置10来执行。
图2示出了根据本申请一个实施例的超声造影成像方法200的示意性流程图。如图2所示,超声造影成像方法200包括如下步骤:
在步骤S210,控制超声探头向含有造影剂的目标组织发射超声波,接收超声波的回波,并基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据。
其中,本申请所说的容积数据是通过超声容积探头进行扫描得到的数据,可以是三维数据,也可以是四维数据。超声容积探头可以是凸阵探头,也可以是面阵探头,此处不做限定。
在本申请的实施例中,通过控制超声探头向含有造影剂的目标组织发射超声波,可以根据超声波的回波同时获取目标组织的容积造影数据(也称为造影体数据)和容积组织数据(也称为组织体数据)。此处,同时获取目标组织的容积造影数据和容积组织数据并非一定意味着在同一时间获取目标组织的容积造影数据和容积组织数据,而是指从超声波的回波中既能获取容积造影数据,又能获取容积组织数据。
下面参考图3来描述本申请实施例的超声造影成像方法中容积造影数据和容积组织数据的示例性获取过程。图3示出根据本申请实施例的超声造影成像方法中获取容积造影数据和容积组织数据的示意性流程框图。如图3所示的,对于含有造影剂的目标组织,可以利用超声容积(或面阵)换能器(探头)进行容积数据采集,根据不同的发射序列可同时获取容积造影数据和容积组织数据这两路体数据。
在本申请的实施例中,可以采用造影成像序列作为发射序列。示例性地,所采用的造影成像发射序列可以包括两个或更多个不同幅度和相位的 发射脉冲。造影成像发射序列激励换能器时往往使用较低的发射电压,以防止破坏造影剂微泡并实现实时超声造影成像。换能器向含有造影剂的目标组织依次发射超声脉冲,并依次接收反射回波输入接收电路(诸如波束合成器等),生成相应的接收回波序列(例如图3所示的接收回波1、接收回波2、……、接收回波N,其中N为自然数)。接着,可以根据相应的信号检测与处理方式,分别进行组织信号和造影信号的检测和提取,生成相应的图像数据并存储,即可同时获取容积造影数据和容积组织数据。
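作为一个仅用于说明的示意性草图（并非本申请所限定的检测与处理方式），下面的Python代码假设造影成像发射序列采用两个幅度相同、相位相反的脉冲（即脉冲反转方式）：两次接收回波相加可近似抵消线性的组织响应、保留造影微泡的非线性响应，从而得到造影信号；两次回波相减则主要保留线性分量，作为组织信号。其中的函数名、归一化与包络处理方式均为示例性假设。

```python
import numpy as np

def separate_contrast_tissue(echo_pos, echo_neg):
    """echo_pos / echo_neg：同一扫描位置、正相/反相发射脉冲对应的接收回波（形状相同的数组）。

    假设为脉冲反转双脉冲序列：相加保留非线性（造影）分量，相减保留线性（组织）分量。"""
    contrast_signal = echo_pos + echo_neg        # 非线性分量 -> 造影信号
    tissue_signal = (echo_pos - echo_neg) / 2.0  # 线性分量 -> 组织信号

    def envelope_log(sig):
        # 简化的包络检波与对数压缩（示例性处理，实际中常用希尔伯特变换求包络）
        env = np.abs(sig)
        return 20.0 * np.log10(env + 1e-6)

    return envelope_log(contrast_signal), envelope_log(tissue_signal)
```

对每条接收回波序列按类似方式处理并分别存储，即可同时得到容积造影数据和容积组织数据。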
在本申请的实施例中,将在步骤S210所获取的容积造影数据称为第一造影数据是为了与下文中将描述的第二造影数据区分开来,没有其他限制意义,这两者的关系将在下文中描述。类似地,在本申请的实施例中,将在步骤S210所获取的容积组织数据称为第一组织数据是为了与下文中将描述的第二组织数据区分开来,没有其他限制意义,这两者的关系将在下文中描述。
现在返回参考图2,基于所获取的容积造影数据和容积组织数据,即能够实现容积造影数据与容积组织数据的混合成像,如下面的步骤将描述的。
在步骤S220,对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,其中,第二造影数据包括第一造影数据的全部或部分数据,第二组织数据包括第一组织数据的全部或部分数据。
在本申请的实施例中,对于在步骤S210中获取的第一造影数据和第一组织数据,在步骤S220可以基于它们两者各自的全部数据进行融合渲染(即对所述第一造影数据和所述第一组织数据进行实时渲染,以得到该第一造影数据和所述第一组织数据的混合渲染图像,并在下文将描述的步骤S230中显示该混合渲染图像),也可以基于它们两者各自的部分数据进行融合渲染,还可以基于它们中的一者的部分数据与另一者的全部数据进行融合渲染,以得到混合渲染图像。其中,第一造影数据和第一组织数据中的任一者的部分数据可以包括感兴趣区域对应的数据。为了使得描述更为清楚和简洁,将在步骤S220中进行实时渲染的数据称为第二造影数据和第二组织数据,其中,第二造影数据包括第一造影数据的全部或部分数据, 第二组织数据包括第一组织数据的全部或部分数据。
在本申请的实施例中,上述部分数据可包括感兴趣区域对应的数据。第二造影数据可以包括第一造影数据的感兴趣区域的数据,基于此,可以从第一造影数据中提取感兴趣区域对应的数据作为第二造影数据。类似地,第二组织数据可以包括第一组织数据的感兴趣区域的数据,基于此,可以从第一组织数据中提取感兴趣区域对应的数据作为第二组织数据。
在本申请的实施例中,无论是对于第一造影数据还是对于第一组织数据,其各自感兴趣区域部分的数据的获取方式可以包括但不限于以下方式(1)到(7)中的任一项或者它们的任意组合:
(1)构造一个实体模型,通过调节实体模型的尺寸来设置感兴趣区域,进而获取感兴趣区域内的组织,进而获取感兴趣区域内的组织数据或者造影数据,其中实体模型可以是不同形状的模型,例如,长方体、椭球体、抛物面或者具有光滑外表面的任何形状的模型,可以是其中一类或多类模型的组合。
(2)通过裁剪、擦除等方式去除不感兴趣的组织,进而获取感兴趣区域内的组织数据或者造影数据。
(3)交互式分割出感兴趣区域组织,如采用基于LiveWire算法的智能剪刀、图像分割算法(诸如GrabCut)等方法半自动分割出感兴趣区域组织,进而获取感兴趣区域内的组织数据或者造影数据。
(4)基于滑窗的方法获取感兴趣区域,进而获取感兴趣区域所对应的组织数据或者造影数据,例如:首先对滑窗内的区域进行特征提取(诸如采用主成分分析(Principal Component Analysis,简称为PCA)、线性判别分析(linear discriminant Analysis,简称LDA)、哈尔(Harr)特征、纹理特征等特征提取方法进行特征提取或采用深度神经网络来进行特征提取),然后将提取到的特征和数据库进行匹配,再采用诸如K近邻法(K-Nearest Neighbor,简称为KNN)、支持向量机(Support Vector Machine,简称为SVM)、随机森林、神经网络等的判别器进行分类,确定当前滑窗是否为感兴趣区域。
(5)基于深度学习的边界框(Bounding-Box)方法检测识别感兴趣区域,进而获取感兴趣区域内的组织数据或者造影数据,例如:通过堆叠基 层卷积层和全连接层来对构建的数据库进行特征的学习和参数的回归,对于一幅输入图像,可以通过网络直接回归出对应的感兴趣区域的边界框,同时获取其感兴趣区域内组织结构的类别,如采用区域卷积神经网络(Region Convolutional Neural Networks,简称为R-CNN)、快速区域卷积神经网络(Fast R-CNN)、更快区域卷积神经网络(Faster-RCNN)、单点多框探测器(Single Shot MultiBox Detector,简称为SSD)、统一框架的实时目标检测(You Only Look Once,简称为YOLO)等,通过该方法自动获取感兴趣区域内的组织。
(6)基于深度学习的端到端的语义分割网络方法检测识别感兴趣区域,进而获取感兴趣区域内的组织数据或者造影数据,该类方法与前文基于深度学习的边界框的结构类似,不同点在于将全连接层去除,加入上采样或者反卷积层来使得输入与输出的尺寸相同,从而直接得到输入图像的感兴趣区域及其相应类别,例如采用全卷积网络(Full Convolutional Networks,简称为FCN)、U网络(U-Net)、掩膜区域卷积神经网络(Mask R-CNN)等,通过该方法自动获取感兴趣区域内的组织。
(7)采用前述(2)、(3)、(4)、(5)或(6)中的方式来对目标进行定位,再根据定位结果额外设计分类器对目标进行分类判断,例如:首先对目标感兴趣区域或掩膜进行特征提取(诸如采用PCA、LDA、Harr特征、纹理特征等特征提取方法进行特征提取或采用深度神经网络来进行特征提取),再将提取到的特征和数据库进行匹配,再采用诸如KNN、SVM、随机森林、神经网络等的判别器进行分类,确定当前滑窗是否为感兴趣区域,通过该方法自动获取感兴趣区域内的组织,进而获取感兴趣区域内的组织数据或者造影数据。
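在通过上述任一方式（例如方式（5）的边界框检测或方式（6）的分割掩膜）得到感兴趣区域后，从容积数据中提取对应数据的过程可以参考下面的示意性草图（函数名与参数均为示例性假设，并非本申请限定的实现）：

```python
import numpy as np

def extract_roi_by_bbox(volume, bbox):
    """volume：三维容积数据 (Z, Y, X)；bbox：体素坐标下的边界框 (z0, z1, y0, y1, x0, x1)。"""
    z0, z1, y0, y1, x0, x1 = bbox
    return volume[z0:z1, y0:y1, x0:x1]

def extract_roi_by_mask(volume, mask):
    """mask：与 volume 同尺寸的二值分割掩膜，掩膜外的数据置零。"""
    return np.where(mask > 0, volume, 0)

# 示例：对第一造影数据与第一组织数据提取同一感兴趣区域，得到第二造影数据与第二组织数据
# second_contrast_data = extract_roi_by_mask(first_contrast_data, roi_mask)
# second_tissue_data   = extract_roi_by_mask(first_tissue_data, roi_mask)
```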
在根据第一造影数据和第一组织数据分别获取第二造影数据和第二组织数据后,可以对第二造影数据和第二组织数据进行融合渲染,以得到混合渲染图像。在本申请的实施例中,对第二造影数据和第二组织数据进行渲染,以得到第二造影数据和第二组织数据的混合渲染图像,可以进一步包括:对第二造影数据和第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到混合渲染图像;或者对第二造影数据和第二组织数据同时进行实时渲染,以得到混合渲染图像。也就是说, 在本申请中,对容积造影数据和容积组织数据的融合渲染可以包括将两者先彼此各自渲染再融合显示,还可以包括将两者一同渲染后显示。下面分别参照图4和图5对这两种融合渲染方式进行描述。
图4示出了根据本申请实施例的超声造影成像方法中对容积造影数据和容积组织数据进行融合渲染的一个示例的示意性流程框图。如图4所示,对容积造影数据(即前文中的第二造影数据)和容积组织数据(即前文中的第二组织数据)各自进行实时渲染,并根据各自渲染后得到的渲染结果进行权重图的计算,该权重图作为将两个渲染结果进行融合的依据,最终根据该权重图将两个渲染结果进行融合以得到混合渲染图像,并显示给用户。
具体地,对第二造影数据和第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到混合渲染图像,可以进一步包括:对第二造影数据进行实时渲染得到第一立体渲染图(其中第一立体渲染图可以为具有三维显示效果的二维图像),并获取第一立体渲染图中每个像素的颜色值和空间深度值;对第二组织数据进行实时渲染得到第二立体渲染图(其中第二立体渲染图可以为具有三维显示效果的二维图像),并获取第二立体渲染图中每个像素的颜色值和空间深度值;基于第一立体渲染图中每个像素的空间深度值和第二立体渲染图中对应位置处像素的空间深度值确定第一立体渲染图中每个像素与第二立体渲染图中对应位置处像素在颜色值融合时各自的权重;基于第一立体渲染图中每个像素与第二立体渲染图中对应位置处像素在颜色值融合时各自的权重计算第三立体渲染图中每个像素的颜色值,并将所计算的颜色值映射到第三立体渲染图中,以得到混合渲染图像。下面详细描述上述过程。
在本申请的一个实施例中,对第二造影数据进行实时渲染的渲染模式可以为面绘制或体绘制,类似地,对第二组织数据进行实时渲染的渲染模式可以为面绘制或体绘制。
其中，面绘制的主要方法可以包括“基于断层轮廓线（Delaunay）”以及“体素中抽取等值面（MarchingCube）”两类方法。以MarchingCube为例，通过提取体数据中组织/器官的等值面（即表面轮廓）信息——三角面片的法向量以及顶点坐标，建立三角形网格模型，然后再结合光照模型进行立体渲染，其中光照模型包括环境光、散射光、高光等，不同光源参数（类型、方向、位置、角度）会在不同程度上影响光照模型的效果，即可得到容积渲染（Volume Render，简称为VR）图。
体绘制主要为光线追踪算法,可以包括以下模式:显示物体表面信息的表面成像模式(Surface)、显示物体内部最大值信息的最大回声模式(Max)、显示物体内部最小值信息的最小回声模式(Min)、显示物体内部结构信息的X光模式(X-Ray)、基于全局光照模型显示物体表面信息的光影成像模式(Volume Rendering with Global Illumination)、通过半透明效果显示物体内外轮廓信息的轮廓模式(Silhouette)以及凸显不同时刻物体表面新增的造影数据或组织数据(新增造影数据或组织数据随时间变化赋予不同伪彩)的时间伪彩成像模式。可以根据具体的需求和/或用户的设置选择合适的体绘制模式。
下面描述基于体绘制方式得到渲染图的两个示例。
在一个示例中,基于视线方向发射多根穿过造影(组织)体数据的光线,每一根光线按固定步长进行递进,对光线路径上的造影(组织)体数据进行采样,根据每个采样点的灰度值确定每个采样点的不透明度,再对每一根光线路径上各采样点的不透明度进行累积得到累积不透明度,最后通过累积不透明度与颜色的映射表将每一根光线路径上的累积不透明度映射为一个颜色值,再将该颜色值映射到二维图像的一个像素上,通过如此方式得到所有光线路径各自对应的像素的颜色值,即可得到VR渲染图。
在另一个示例中,基于视线方向发射多根穿过造影(组织)体数据的光线,每一根光线按固定步长进行递进,对光线路径上的造影(组织)体数据进行采样,根据每个采样点的灰度值确定每个采样点的不透明度,根据不透明度与颜色的映射表将每个采样点的不透明度映射为一个颜色值,再对每一根光线路径上各采样点的颜色值进行累积得到累积颜色值,并将该累积颜色值映射到二维图像的一个像素上,通过如此方式得到所有光线路径各自对应的像素的颜色值,即可得到VR渲染图。
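与上述第二个示例的思路对应，下面给出一个省略了插值、光照与提前终止等细节的极简Python草图（灰度到不透明度的线性映射、颜色映射表的构造方式等均为示例性假设）：

```python
import numpy as np

def render_ray(volume, origin, direction, step=1.0, n_steps=256, color_table=None):
    """对单根光线进行体绘制采样与颜色累积（最近邻采样的简化实现）。

    volume：(Z, Y, X) 灰度体数据；origin / direction：体素坐标系下的光线起点与单位方向向量。
    color_table：形状为 (256, 3) 的不透明度到颜色的映射表（此处的构造方式仅为示例）。"""
    if color_table is None:
        ramp = np.linspace(0.0, 1.0, 256)
        color_table = np.stack([ramp, ramp * 0.9, ramp * 0.6], axis=1)  # 示例性的暖色调映射表
    accumulated = np.zeros(3, dtype=np.float32)
    pos = np.asarray(origin, dtype=np.float32)
    direction = np.asarray(direction, dtype=np.float32)
    for _ in range(n_steps):
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.asarray(volume.shape)):
            break                                       # 光线离开体数据范围即停止步进
        value = volume[idx[0], idx[1], idx[2]]
        opacity = np.clip(value / 255.0, 0.0, 1.0)      # 根据采样点灰度值确定不透明度（线性映射，示例）
        accumulated += color_table[int(opacity * 255)]  # 不透明度经映射表转为颜色后沿光线累积
        pos += direction * step                         # 按固定步长沿光线递进
    return accumulated                                   # 该累积颜色值映射为二维图像上一个像素的颜色

# 对每根光线调用 render_ray 并将结果写入对应像素，即可得到VR渲染图（实际实现中通常还需做归一化处理）。
```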
以上示例性地示出了对第二造影数据和第二组织数据各自进行实时渲染的方式。为了彼此区分,将对第二造影数据进行实时渲染得到的渲染图称为第一立体渲染图,将对第二组织数据进行实时渲染得到的渲染图称 为第二立体渲染图。在将第一立体渲染图与第二立体渲染图进行融合显示时,可以首先确定第一权重图,再根据第一权重图确定第二权重图,也可以首先确定第二权重图,再根据第二权重图确定第一权重图。其中,第一权重图可以是与第一立体渲染图同样大小的图,该图中的每一点的值(一般大小在0到1之间)表示将第一立体渲染图与第二立体渲染图融合显示时第一立体渲染图中各像素的颜色值应该采用的权重值;类似地,第二权重图可以是与第二立体渲染图同样大小的图,该图中的每一点的值(一般大小在0到1之间)表示将第一立体渲染图与第二立体渲染图融合显示时第二立体渲染图中各像素的颜色值应该采用的权重值。可以理解,以权重值在[0,1]区间为例,第一权重图中任一点的值与第二权重图中相应位置处的点的值这两者的和应该为1。其中,权重值在[0,1]区间仅作为示例性说明,本申请对权重值的取值区间不做限定。因此,第一权重图如果表示为Map,则第二权重图则可表示为1-Map;类似地,第一权重图如果表示为weight,则第二权重图则可表示为1-weight。由于面绘制和体绘制原理的不同,在融合显示时采用的权重图稍有不同。下面以先确定第一权重图为例来描述,由于第一权重图是融合显示时第一立体渲染图各像素应采用的权重值,所以分别以通过面绘制得到第一立体渲染图和通过体绘制得到第一立体渲染图这两种情况进行描述。
对于通过面绘制得到的第一立体渲染图(第二立体渲染图通过面绘制或者体绘制而得到),可以获取第一立体渲染图和第二立体渲染图各自图中每个像素的空间深度值(对于面绘制而言,可以通过获取三角面片的顶点坐标来获取空间深度信息;对于体绘制而言,可以通过获取光线路径上首次采样到组织/器官的起始位置以及光线停止步进的截止位置来获取空间深度信息),以用于计算第一权重图。由于第一权重图的计算是基于第一立体渲染图和第二立体渲染图中各像素的空间深度信息,因此此处可将第一权重图称为第一空间位置权重图,将第二权重图称为第二空间位置权重图。如果将第一空间位置权重图表示为Map,则第二空间位置权重图可表示为1-Map。下面描述第一空间位置权重图Map的确定过程以及基于其的第一立体渲染图和第二立体渲染图的融合显示。
在本申请的实施例中,可以根据第一立体渲染图和第二立体渲染图各 自图中每个像素的空间深度值,确定第一立体渲染图中的像素与第二立体渲染图中相应位置处的像素各自对应的数据之间的空间位置关系,从而确定第一权重图。在确定第一立体渲染图中的像素与第二立体渲染图中相应位置处的像素各自对应的数据之间的空间位置关系时,可以以第一立体渲染图中像素的空间深度值作为参考标准来确定用于与第二立体渲染图中像素的空间深度值进行比较的有效空间深度数值区间,并基于比较结果确定第一立体渲染图中的像素与第二立体渲染图中相应位置处的像素各自对应的数据之间的空间位置关系;或者,也可以以第二立体渲染图中像素的空间深度值作为参考标准来确定用于与第一立体渲染图中像素的空间深度值进行比较的有效空间深度数值区间,并基于比较结果确定第一立体渲染图中的像素与第二立体渲染图中相应位置处的像素各自对应的数据之间的空间位置关系。其中,第一立体渲染图和第二立体渲染图中每个像素的空间深度值均可以包括一个或更多个空间深度范围,也就是说,第一立体渲染图和第二立体渲染图中每个像素的空间深度值均包括一个最小值和一个最大值(其中该最小值和最大值可以分别是每个像素的有效深度范围内的最小值和最大值,例如体绘制时通过设定的灰度阈值筛选出的有效深度范围内的最小值和最大值),因此可获取第一立体渲染图和第二立体渲染图中每个像素的空间深度值中的最小值和最大值以用于逐像素进行比较。
下面以第二立体渲染图中的像素的空间深度值作为参考标准为例来描述:对于第一立体渲染图和第二立体渲染图中任意一个位置处的像素,假定第二立体渲染图中该位置处的像素的空间深度值中的最小值为Y1、最大值为Y2,第一立体渲染图中该位置处的像素的空间深度值中的最小值为X1、最大值为X2,如果X1小于或等于Y1,则表示从用户视角看即该位置处造影体数据位于组织体数据的前部,则此时第一空间位置权重图Map中该位置处的值可以设置为1,即在该位置处只显示造影信号;如果X2大于或等于Y2,则表示从用户视角看即该位置处造影体数据位于组织体数据的背部,则此时第一空间位置权重图Map中该位置处的值可以设置为0,即在该位置处只显示组织信号;如果X 1大于Y1且X2小于Y2,则表示从用户视角看即该位置处造影体数据位于组织体数据的内部,则此时第一空间位置权重图Map中该位置处的值可以设置为0到1之间的值,即在 该位置处按照一定比例显示造影信号和组织信号,具体的比例可以按照用户需求或者其他预设需求而设置。以此方式,可以对第一立体渲染图和第二立体渲染图中的各像素位置处的权重进行设置,从而求得第一空间位置权重图Map。以上,以第二立体渲染图的空间深度值作为参考标准进行示例性说明,还可以将第一立体渲染图的空间深度值作为参考标准进行考量,本申请不做限定。此外,以上,以权重值之和为1进行示例性说明,对权重的取值范围,本申请也不做限定。
基于上述确定的第一空间位置权重图Map,可进行第一立体渲染图和第二立体渲染图的融合显示,第一立体渲染图和第二立体渲染图融合后得到的第三立体渲染图(即混合渲染图像)各像素点的颜色值的计算公式(融合方式)可以表示为:
Color Total=Color C·Map+Color B·(1-Map)
其中,Color Total为融合后的颜色值,Color C为第一立体渲染图(造影图)中像素的颜色值,Color B为第二立体渲染图(组织图)中像素的颜色值,Map为第一空间位置权重图。
对于通过体绘制得到的第一立体渲染图(第二立体渲染图通过面绘制或者体绘制而得到),可以获取第一立体渲染图和第二立体渲染图各自图中每个像素的空间深度值和第一立体渲染图中每个像素的累积不透明度值,以用于计算第一权重图。由于第一权重图的计算是基于第一立体渲染图和第二立体渲染图中各像素的空间深度值并基于第一立体渲染图中每个像素的累积不透明度值,因此此处可将第一权重图表示为weight,则第二权重图表示为1-weight,且第一权重图weight中每一点的值等于前述的第一空间位置权重图中每一点的值乘以第一立体渲染图中该位置处像素的累积不透明度值,即weight=Map*Opacity c
基于上述第一权重图weight,可进行第一立体渲染图和第二立体渲染图的融合显示,第一立体渲染图和第二立体渲染图融合后得到的第三立体渲染图(即混合渲染图像)各像素点的颜色值的计算公式(融合方式)可以表示为:
Color Total=Color C·weight+Color B·(1-weight)
weight=Map·Opacity C
其中,Color Total为融合后的颜色值,Color C为第一立体渲染图(造影图)中像素的颜色值,Color B为第二立体渲染图(组织图)中像素的颜色值,weight为第一权重图,Map为第一空间位置权重图,Opacity C为第一立体渲染图中像素的累积不透明度值。在第一立体渲染图是通过体绘制方式获得的情况下,在将第一立体渲染图与第二立体渲染图融合显示时,除了考虑前述的空间位置权重,还加入了第一立体渲染图中各像素的累积不透明度,这可以使得融合后得到的图像效果更为柔顺、边缘过渡更为自然。
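作为上述融合方式的一个极简示意（数组名称、造影位于组织内部时的权重取值、权重冲突时的优先级等均为示例性假设，并非对本申请融合方式的限定），下面的Python草图按像素比较两幅立体渲染图的空间深度范围得到空间位置权重图Map，在体绘制情形下再乘以第一立体渲染图的累积不透明度得到weight，最后按上式对颜色值加权融合：

```python
import numpy as np

def fuse_stereo_renderings(color_c, color_b, x1, x2, y1, y2, opacity_c=None, inside_weight=0.5):
    """按像素融合第一立体渲染图（造影）与第二立体渲染图（组织）。

    color_c / color_b：(H, W, 3) 颜色值；x1, x2：第一立体渲染图每个像素空间深度的最小值/最大值；
    y1, y2：第二立体渲染图对应像素空间深度的最小值/最大值；
    opacity_c：体绘制情形下第一立体渲染图每个像素的累积不透明度（可选）。
    inside_weight 为造影位于组织内部时的显示比例，此处取 0.5 仅作示例。"""
    h, w = x1.shape
    map_ = np.full((h, w), inside_weight, dtype=np.float32)   # 默认：造影位于组织内部
    map_[x2 >= y2] = 0.0                                       # 造影位于组织背部 -> 只显示组织
    map_[x1 <= y1] = 1.0                                       # 造影位于组织前部 -> 只显示造影（两条件同时成立时此处示例性地以前部为准）
    weight = map_ if opacity_c is None else map_ * opacity_c   # weight = Map 或 Map * Opacity_C
    w3 = weight[..., None]
    return color_c * w3 + color_b * (1.0 - w3)                 # Color_Total = Color_C*weight + Color_B*(1-weight)
```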
以上结合图4示例性地示出了对容积造影数据和容积组织数据进行融合渲染的一个示例(即各自进行渲染后融合显示)。下面结合图5描述对容积造影数据和容积组织数据进行融合渲染的另一个示例。图5示出根据本申请实施例的超声造影成像方法中对容积造影数据和容积组织数据进行融合渲染的另一个示例的示意性流程框图。如图5所示,对容积造影数据(即前文中的第二造影数据)和容积组织数据(即前文中的第二组织数据)同时进行体绘制渲染,根据第二造影数据和第二组织数据的灰度信息、深度信息获取颜色值来得到混合渲染图像。
具体地,对第二造影数据和第二组织数据同时进行实时渲染,以得到混合渲染图像,可以进一步包括:对第二造影数据和第二组织数据同时进行体绘制,获取体绘制的过程中每根光线路径上每个采样点的空间深度值和灰度值,其中每个采样点的灰度值包括第二造影数据在该点的灰度值和/或第二组织数据在该点的灰度值;基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,并基于每根光线路径上所有采样点的颜色值确定每根光线路径上的累积颜色值;基于每根光线路径上的累积颜色值确定第三立体渲染图中每个像素的颜色值,并将累积颜色值映射到第三立体渲染图中,以得到混合渲染图像。
其中,基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,可以包括:根据预设三维颜色索引表,基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,三维颜色索引表中的三维变量分别为造影灰度值、组织灰度值和空间深度值,三维变量对应于一个颜色值;或者,根据预定映射函数,基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,预定映 射函数包括三个变量,分别为造影灰度值、组织灰度值和空间深度值,预定映射函数的函数结果为颜色值。
在该实施例中,采用光线追踪算法,基于视线方向发射多根穿过造影体数据和组织体数据的光线,每一根光线按固定步长进行递进,对光线路径上的造影体数据和组织体数据进行采样,得到每个采样点的造影体数据的灰度值和/或组织体数据的灰度值,再结合当前光线的步进深度信息来索引三维颜色表得到颜色值或根据预定映射函数得到颜色值,从而得到每个采样点的颜色值,再对每一根光线路径上各采样点的颜色值进行累积,并将该累积颜色值映射到二维图像的一个像素上,通过如此方式得到所有光线路径各自对应的像素的颜色值,即可得到VR渲染图,从而得到最终的混合渲染图像。也就是说,对第二造影数据和第二组织数据同时进行渲染,以得到混合渲染图像,用公式表示可以为:
Color ray=3DColorTexture(value C,value B,depth)
Color Total=∑ i=start…end Color ray(i)
其中，Color ray为当前采样点的颜色值，value C为当前采样点的造影灰度值，value B为当前采样点的组织灰度值，depth为当前采样点的光线深度信息，3DColorTexture()为三维颜色索引表或预定映射函数，Color Total为当前光线路径上各采样点的累积颜色值，start表示当前光线路径上的第一个采样点，end表示当前光线路径上的最后一个采样点。
在步骤S230,实时显示该混合渲染图像。
在一个示例中,该混合渲染图像包含对上述第二造影数据进行实时渲染得到的至少一部分渲染图,以及对上述第二组织数据进行实时渲染得到的至少一部分渲染图。
需要说明的是,本申请能够实现超声容积造影和容积组织混合的实时成像,即实时地采集组织与造影的容积数据,并实时渲染后显示组织与造影的混合图像。一般来说它的成像帧频在0.8VPS(Volume Per Seconds)以上。相较于CT、MRI等非实时成像,本申请能够大大减少成像过程的耗时。
如前文所述的,上述第二造影数据和第二组织数据均为容积数据(即三维或四维数据),因此,基于前述的步骤S210到S220,可得到一帧混合渲染图像或多帧混合渲染图像。在本申请的实施例中,当得到多帧混合渲染图像,可以将所述多帧混合渲染图像进行多帧动态显示,例如将所述多帧混合渲染图像按照时间顺序动态显示。示例性地,对于每帧混合渲染图像,可以以不同的图像特征(例如不同的颜色)来显示其中表示造影数据的部分或表示组织数据的部分。例如,以黄色来显示混合渲染图像中表示造影数据的部分,以灰色来显示混合渲染图像中表示组织数据的部分。这样,在多帧混合渲染图像动态显示的过程中,用户可观察到造影剂与组织之间空间位置关系的实时变化过程。
在本申请的一个实施例中,上述目标组织可包括输卵管区域,进一步的,还可以对混合渲染图像进行特征提取,并基于特征提取的结果输出对目标对象的输卵管区域的分析结果。
需要说明的是,基于步骤S230所得到的混合渲染图像,可以基于对混合渲染图像提取的特征获取该混合渲染图像中呈现的输卵管的分析结果,以为目标对象的输卵管的诊断提供依据。在得到不止一帧混合渲染图像时,可以对每帧混合渲染图像进行特征提取并输出每帧混合渲染图像对应的输卵管区域分析结果,也可以结合多帧混合渲染图像的特征提取结果输出其中一帧混合渲染图像对应的输卵管区域分析结果(诸如结合N帧混合渲染图像的特征提取结果输出仅最后一帧即第N帧混合渲染图像对应的输卵管区域分析结果,此处N为大于1的自然数)。
在本申请的实施例中,可以基于图像处理算法对每帧混合渲染图像进行的特征提取,诸如采用主成分分析(Principal Components Analysis,简称为PCA)、线性判别分析(Linear Discriminant Analysis,简称为LDA)、哈尔(Harr)特征、纹理特征等算法。在本申请的实施例中,还可以基于神经网络对每帧混合渲染图像进行特征提取,诸如采用AlexNet、VGG、ResNet、MobileNet、DenseNet、EfficientNet、EfficientDet等。
在本申请的实施例中,基于特征提取的结果输出对输卵管区域的分析结果,可以包括:将特征提取的结果与数据库中存储的特征进行匹配,采用判别器进行分类,并输出分类结果以作为对输卵管区域的分析结果。示例性地, 判别器可以包括但不限于K最近邻(K-Nearest Neighbor,简称为KNN)、支持向量机(Support Vector Machines,简称为SVM)、随机森林、神经网络等。
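下面给出一个示意性的分析流程草图（特征的选取、判别器的类型、类别标签与训练数据均为示例性假设，并非本申请限定的实现）：对混合渲染图像提取简单的灰度直方图特征，并用K近邻判别器输出输卵管相关属性的类别及其概率。

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def histogram_feature(image, bins=32):
    """对一帧混合渲染图像提取归一化灰度直方图作为特征（示例性特征，实际可替换为PCA、纹理或深度网络特征）。"""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255), density=True)
    return hist

# 训练阶段：train_images 与 train_labels 来自已构建的数据库（示例性假设的变量名）
# X_train = np.stack([histogram_feature(img) for img in train_images])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)

# 分析阶段：对当前混合渲染图像输出属性类别（如“正常”、“通而不畅”、“阻塞”）及其概率
# feature = histogram_feature(hybrid_image)[None, :]
# label = clf.predict(feature)[0]
# prob = clf.predict_proba(feature).max()
```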
在本申请的实施例中,对输卵管区域的分析结果可以包括目标对象的输卵管的至少一个相关属性。示例性地,相关属性可以包括通畅性属性、形状属性、伞端是否积水的属性以及是否存在囊肿的属性。其中,通畅性属性可以包括:正常、通而不畅、阻塞、缺失等等;形状属性可以包括扭曲、过长、过短等等。此外,对输卵管区域的分析结果还可以包括所确定的相关属性的概率值,诸如输卵管通而不畅的概率值、输卵管扭曲的概率值等等。示例性地,每个相关属性的概率值的数值范围可以为0到100%。如前所述的,可以通过对每帧混合渲染图像的特征提取以及分类输出相应的分析结果,即基于一帧或若干帧混合渲染图像确定的目标对象的输卵管的上述相关属性中的至少一个以及每个相关属性的概率值。
在本申请的进一步的实施例中，对输卵管区域的分析结果还可以包括目标对象的输卵管的评分结果，该评分结果可以是基于输出的每个相关属性以及每个相关属性的概率值而确定的。在一个示例中，通过特征提取以及判别器分类确定目标对象的输卵管的通畅性属性为正常，且概率为100%，则其评分可以为正常100。在另一个示例中，通过特征提取以及判别器分类确定目标对象的输卵管的通畅性属性为阻塞，且概率为100%，则其评分可以为阻塞100。在其他示例中，还可以通过多个相关属性各自的概率值确定一个综合评分。
在本申请的实施例中,可以在至少一帧混合渲染图像上标注出其对应的输卵管分析结果,并将经标注的混合渲染图像显示给用户,例如显示正常输卵管的混合渲染图像,其标注的评分结果——正常:100;又例如显示阻塞输卵管的混合渲染图像,其标注的评分结果——阻塞:100。在该实施例中,将标注有输卵管分析结果的混合渲染图像显示给用户(诸如医生),混合渲染图像中既能看到造影区域又能看到组织区域,可以使得用户直观地理解、观察造影剂在组织内的空间位置关系以及流动情况,混合渲染图像的标注结果使得用户直观了解目标对象的输卵管自动分析结果,为医生的诊断提供参考,有助于医生进一步提高诊断效率。在其他实施例中,也可以将混合渲染图像和输卵管分析结果各自单独显示。
在本申请的进一步的实施例中,在上述多帧动态显示的基础上还可以 进行伪彩显示。例如,对于当前帧混合渲染图像中相对于上一帧混合渲染图像新增的位于组织数据前部的可显示的造影数据,可以以不同于先前的颜色进行显示,以显示造影数据最新出现在组织数据中的位置。例如,在先前的示例中,以黄色来显示混合渲染图像中表示造影数据的部分,在该实施例中,可以以与黄色不同的颜色,诸如蓝色来显示表示新增可显示造影数据的部分。这样,在多帧混合渲染图像动态显示的过程中,用户不仅可观察到造影剂与组织之间空间位置关系的实时变化过程,还可观察到造影剂在组织中的流动情况。
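该时间伪彩显示中“新增可显示造影数据”的确定可以参考下面的示意性草图（掩膜的来源与具体伪彩颜色均为示例性假设）：

```python
import numpy as np

def mark_new_contrast(prev_mask, curr_mask):
    """prev_mask / curr_mask：相邻两帧混合渲染图像中“位于组织数据前部的可显示造影数据”的布尔掩膜。

    返回当前帧相对上一帧新增的造影数据掩膜，显示时可对其赋予不同的伪彩（例如蓝色），
    其余造影部分仍按原有颜色（例如黄色）显示。"""
    return np.logical_and(curr_mask, np.logical_not(prev_mask))
```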
在本申请的进一步的实施例中,在得到当前帧混合渲染图像后,可接收用户指令,以根据用户指令调节对当前帧混合渲染图像的显示情况。例如,用户期望当前帧混合渲染图像中全部显示组织数据、或者全部显示造影数据、或者以期望的透明度显示组织数据和造影数据等,则可根据用户指令,调节用于当前帧的融合显示的前述权重图中的权重,以得到用户期望的显示效果。该实施例可实现当前帧混合渲染图像支持用户可调,从而实现更为灵活的容积造影与组织混合成像。
以上示例性地示出了根据本申请实施例的超声造影成像方法对容积造影数据和容积组织数据进行融合渲染的过程,最终得到的容积造影数据与容积组织数据的混合渲染图像可以如图6所示的。图6示出根据本申请实施例的超声造影成像方法得到的混合渲染图像的示例性示意图。如图6所示的,混合渲染图像中既能看到造影区域又能看到组织区域,能够帮助用户更为直观地理解、观察造影剂在组织内的实时空间位置关系,以及获取更多的临床信息。
基于上面的描述,根据本申请实施例的超声造影成像方法同时采集容积造影数据和容积组织数据,对二者进行融合渲染得到混合渲染图像,能够帮助用户更为直观地理解、观察造影剂在组织内的实时空间位置关系,以及获取更多的临床信息。
下面结合图7到图8描述根据申请另一方面提供的超声成像装置。图7示出了根据本申请一个实施例的超声成像装置700的示意性框图。如图7所示,超声成像装置700可以包括发射/接收序列控制器710、超声探头720、处理器730和显示器740。其中,发射/接收序列控制器710用于控制超声 探头720向含有造影剂的目标组织发射超声波,接收超声波的回波,并基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据。处理器730用于对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,其中,第二造影数据包括第一造影数据的全部或部分数据,第二组织数据包括第一组织数据的全部或部分数据;显示器740用于实时显示混合渲染图像。
在本申请的一个实施例中,部分数据包括感兴趣区域对应的数据,处理器730还可以用于:从第一造影数据中提取感兴趣区域对应的数据,以作为第二造影数据;和/或,从第一组织数据中提取感兴趣区域对应的数据,以作为第二组织数据。
在本申请的一个实施例中,处理器730对第二造影数据和第二组织数据进行实时渲染,以得到第二造影数据和第二组织数据的混合渲染图像,可以进一步包括:对第二造影数据和第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到混合渲染图像;或者对第二造影数据和第二组织数据同时进行实时渲染,以得到混合渲染图像。
在本申请的一个实施例中，处理器730对第二造影数据和第二组织数据各自进行实时渲染，并将各自渲染后得到的渲染结果进行融合，以得到混合渲染图像，可以进一步包括：对第二造影数据进行实时渲染得到第一立体渲染图，并获取第一立体渲染图中每个像素的颜色值和空间深度值；对第二组织数据进行实时渲染得到第二立体渲染图，并获取第二立体渲染图中每个像素的颜色值和空间深度值；基于第一立体渲染图中每个像素的空间深度值和第二立体渲染图中对应位置处像素的空间深度值确定第一立体渲染图中每个像素与第二立体渲染图中对应位置处像素在颜色值融合时各自的权重；基于第一立体渲染图中每个像素与第二立体渲染图中对应位置处像素在颜色值融合时各自的权重计算第三立体渲染图中每个像素的颜色值，并将所计算的颜色值映射到第三立体渲染图中，以得到混合渲染图像。
在本申请的一个实施例中,处理器730对第二造影数据和对第二组织数据进行实时渲染的渲染模式可以均为面绘制。
在本申请的一个实施例中,处理器730对第二造影数据和/或对第二组织数据进行实时渲染的渲染模式可以为体绘制,处理器730确定第一立体渲染图中每个像素与第二立体渲染图中对应位置处像素在颜色值融合时各自的权重还可以基于第一立体渲染图中每个像素的累积不透明度和/或第二立体渲染图中每个像素的累积不透明度。
在本申请的一个实施例中,处理器730对第二造影数据和第二组织数据同时进行实时渲染,以得到混合渲染图像,可以进一步包括:对第二造影数据和第二组织数据同时进行体绘制,获取体绘制的过程中每根光线路径上每个采样点的空间深度值和灰度值,其中每个采样点的灰度值包括第二造影数据在该点的灰度值和/或第二组织数据在该点的灰度值;基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,并基于每根光线路径上所有采样点的颜色值确定每根光线路径上的累积颜色值;基于每根光线路径上的累积颜色值确定第三立体渲染图中每个像素的颜色值,并将累积颜色值映射到第三立体渲染图中,以得到混合渲染图像。
在本申请的一个实施例中,处理器730基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,可以包括:根据预设三维颜色索引表,基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,三维颜色索引表中的三维变量分别为造影灰度值、组织灰度值和空间深度值,三维变量对应于一个颜色值;或者根据预定映射函数,基于每根光线路径上每个采样点的空间深度值和灰度值获取每个采样点的颜色值,预定映射函数包括三个变量,分别为造影灰度值、组织灰度值和空间深度值,预定映射函数的函数结果为颜色值。
在本申请的一个实施例中,处理器730对感兴趣区域对应的数据的提取可以是基于深度学习装置来实现的。
在本申请的一个实施例中,超声探头720基于超声波的回波获取第一造影数据和第一组织数据,可以进一步包括:基于超声波的回波获取第一造影信号和第一组织信号;基于第一造影信号实时获取第一造影数据,并基于第一组织信号实时获取第一组织数据。
总体上,根据本申请实施例的超声成像装置700可以用于执行前文描 述的根据本申请实施例的超声造影成像方法200,本领域技术人员可以结合前文的描述理解超声成像装置700的结构及操作,为了简洁,对于上文中的一些细节,此处不再赘述。
基于上面的描述,根据本申请实施例的超声成像装置同时采集容积造影数据和容积组织数据,对二者进行融合渲染得到混合渲染图像,能够帮助用户更为直观地理解、观察造影剂在组织内的实时空间位置关系,以及获取更多的临床信息。
图8示出了根据本申请实施例的超声成像装置800的示意性框图。超声成像装置800包括存储器810以及处理器820。
其中,存储器810存储用于实现根据本申请实施例的超声造影成像方法200中的相应步骤的程序。处理器820用于运行存储器810中存储的程序,以执行根据本申请实施例的超声造影成像方法200的相应步骤。
根据本申请的又一方面,还提供了一种超声造影成像方法,该方法包括:控制超声探头向含有造影剂的目标组织发射超声波,接收超声波的回波,基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据;对第一造影数据进行实时渲染得到第一立体渲染图,对第一组织数据进行实时渲染得到第二立体渲染图;同时显示第一立体渲染图和第二立体渲染图。在该实施例中,从超声波的回波中获取容积造影数据和容积组织数据,将其各自实时渲染后得到各自的立体渲染图在同一界面上同时显示出来,也能够帮助用户观察造影剂在组织内的实时空间位置关系,以及获取更多的临床信息。
根据本申请的再一方面,还提供了一种超声成像装置,该超声成像装置可以用于实施上述的超声造影成像方法。具体地,该超声成像装置可以包括超声探头、发射/接收序列控制器、处理器和显示器,其中:发射/接收序列控制器用于控制超声探头向含有造影剂的目标组织发射超声波,接收超声波的回波,并基于超声波的回波实时获取第一造影数据和第一组织数据,第一造影数据和第一组织数据均为容积数据;处理器用于对第一造影数据进行实时渲染得到第一立体渲染图,对第一组织数据进行实时渲染得到第二立体渲染图;显示器用于同时并实时显示第一立体渲染图和第二立体渲染图。本领域技术人员可以结合前文的描述理解该超声成像装置的结 构及操作,为了简洁,对于上文中的一些细节,此处不再赘述。
此外,根据本申请实施例,还提供了一种存储介质,在存储介质上存储了程序指令,在程序指令被计算机或处理器运行时用于执行本申请实施例的超声造影成像方法的相应步骤。存储介质例如可以包括智能电话的存储卡、平板电脑的存储部件、个人计算机的硬盘、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器、或者上述存储介质的任意组合。计算机可读存储介质可以是一个或多个计算机可读存储介质的任意组合。
此外,根据本申请实施例,还提供了一种计算机程序,该计算机程序可以存储在云端或本地的存储介质上。在该计算机程序被计算机或处理器运行时用于执行本申请实施例的超声造影成像方法的相应步骤。
基于上面的描述,根据本申请实施例的超声造影成像方法、超声成像装置和存储介质同时采集容积造影数据和容积组织数据,对二者进行融合渲染得到混合渲染图像,能够帮助用户更为直观地理解、观察造影剂在组织内的实时空间位置关系,以及获取更多的临床信息。
尽管这里已经参考附图描述了示例实施例,应理解上述示例实施例仅仅是示例性的,并且不意图将本申请的范围限制于此。本领域普通技术人员可以在其中进行各种改变和修改,而不偏离本申请的范围和精神。所有这些改变和修改意在被包括在所附权利要求所要求的本申请的范围之内。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其他的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。
在此处所提供的说明书中,说明了大量具体细节。然而,能够理解,本申请的实施例可以在没有这些具体细节的情况下实践。在一些实例中,并未详细示出公知的方法、结构和技术,以便不模糊对本说明书的理解。
类似地,应当理解,为了精简本申请并帮助理解各个发明方面中的一个或多个,在对本申请的示例性实施例的描述中,本申请的各个特征有时被一起分组到单个实施例、图、或者对其的描述中。然而,并不应将该本申请的方法解释成反映如下意图:即所要求保护的本申请要求比在每个权利要求中所明确记载的特征更多的特征。更确切地说,如相应的权利要求书所反映的那样,其发明点在于可以用少于某个公开的单个实施例的所有特征的特征来解决相应的技术问题。因此,遵循具体实施方式的权利要求书由此明确地并入该具体实施方式,其中每个权利要求本身都作为本申请的单独实施例。
本领域的技术人员可以理解,除了特征之间相互排斥之外,可以采用任何组合对本说明书(包括伴随的权利要求、摘要和附图)中公开的所有特征以及如此公开的任何方法或者装置的所有过程或单元进行组合。除非另外明确陈述,本说明书(包括伴随的权利要求、摘要和附图)中公开的每个特征可以由提供相同、等同或相似目的的替代特征来代替。
此外,本领域的技术人员能够理解,尽管在此的一些实施例包括其他实施例中所包括的某些特征而不是其他特征,但是不同实施例的特征的组合意味着处于本申请的范围之内并且形成不同的实施例。例如,在权利要求书中,所要求保护的实施例的任意之一都可以以任意的组合方式来使用。
本申请的各个部件实施例可以以硬件实现,或者以在一个或者多个处理器上运行的软件模块实现,或者以它们的组合实现。本领域的技术人员应当理解,可以在实践中使用微处理器或者数字信号处理器(DSP)来实现根据本申请实施例的一些模块的一些或者全部功能。本申请还可以实现为用于执行这里所描述的方法的一部分或者全部的装置程序(例如,计算机程序和计算机程序产品)。这样的实现本申请的程序可以存储在计算机可读介质上,或者可以具有一个或者多个信号的形式。这样的信号可以从因特网网站上下载得到,或者在载体信号上提供,或者以任何其他形式提供。
应该注意的是上述实施例对本申请进行说明而不是对本申请进行限 制,并且本领域技术人员在不脱离所附权利要求的范围的情况下可设计出替换实施例。在权利要求中,不应将位于括号之间的任何参考符号构造成对权利要求的限制。本申请可以借助于包括有若干不同元件的硬件以及借助于适当编程的计算机来实现。在列举了若干装置的单元权利要求中,这些装置中的若干个可以是通过同一个硬件项来具体体现。单词第一、第二、以及第三等的使用不表示任何顺序。可将这些单词解释为名称。
以上,仅为本申请的具体实施方式或对具体实施方式的说明,本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。本申请的保护范围应以权利要求的保护范围为准。

Claims (27)

  1. 一种超声造影成像方法,其特征在于,所述方法包括:
    控制超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    对第二造影数据和第二组织数据进行实时渲染,以得到所述第二造影数据和所述第二组织数据的混合渲染图像,其中,所述第二造影数据包括所述第一造影数据的全部或部分数据,所述第二组织数据包括所述第一组织数据的全部或部分数据;
    实时显示所述混合渲染图像。
  2. 根据权利要求1所述的方法,其特征在于,所述部分数据包括感兴趣区域对应的数据,所述方法还包括:
    从所述第一造影数据中提取感兴趣区域对应的数据,以作为所述第二造影数据;和/或,从所述第一组织数据中提取感兴趣区域对应的数据,以作为所述第二组织数据。
  3. 根据权利要求1或2所述的方法,其特征在于,所述对第二造影数据和第二组织数据进行实时渲染,以得到所述第二造影数据和所述第二组织数据的混合渲染图像,进一步包括:
    对所述第二造影数据和所述第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到所述混合渲染图像;或者
    对所述第二造影数据和所述第二组织数据同时进行实时渲染,以得到所述混合渲染图像。
  4. 根据权利要求3所述的方法,其特征在于,所述对所述第二造影数据和所述第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到所述混合渲染图像,进一步包括:
    对所述第二造影数据进行实时渲染得到第一立体渲染图,并获取所述第一立体渲染图中每个像素的颜色值和空间深度值;
    对所述第二组织数据进行实时渲染得到第二立体渲染图,并获取所述第二立体渲染图中每个像素的颜色值和空间深度值;
    基于所述第一立体渲染图中每个像素的空间深度值和所述第二立体 渲染图中对应位置处像素的空间深度值确定所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重;
    基于所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重计算第三立体渲染图中每个像素的颜色值,并将所计算的颜色值映射到所述第三立体渲染图中,以得到所述混合渲染图像。
  5. 根据权利要求4所述的方法,其特征在于,对所述第二造影数据和对所述第二组织数据进行实时渲染的渲染模式均为面绘制。
  6. 根据权利要求4所述的方法,其特征在于,对所述第二造影数据和/或对所述第二组织数据进行实时渲染的渲染模式为体绘制,所述确定所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重还基于所述第一立体渲染图中每个像素的累积不透明度值和/或所述第二立体渲染图中每个像素的累积不透明度值。
  7. 根据权利要求3所述的方法,其特征在于,所述对所述第二造影数据和所述第二组织数据同时进行实时渲染,以得到所述混合渲染图像,进一步包括:
    对所述第二造影数据和所述第二组织数据同时进行体绘制,获取所述体绘制的过程中每根光线路径上每个采样点的空间深度值和灰度值,其中每个采样点的灰度值包括所述第二造影数据在该点的灰度值和/或所述第二组织数据在该点的灰度值;
    基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,并基于每根光线路径上所有采样点的颜色值确定每根光线路径上的累积颜色值;
    基于所述每根光线路径上的累积颜色值确定第三立体渲染图中每个像素的颜色值,并将所述累积颜色值映射到所述第三立体渲染图中,以得到所述混合渲染图像。
  8. 根据权利要求7所述的方法,其特征在于,所述基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,包括:
    根据预设三维颜色索引表,基于所述每根光线路径上每个采样点的空 间深度值和灰度值获取所述每个采样点的颜色值,所述三维颜色索引表中的三维变量分别为造影灰度值、组织灰度值和空间深度值,所述三维变量对应于一个颜色值;或者,
    根据预定映射函数,基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,所述预定映射函数包括三个变量,分别为造影灰度值、组织灰度值和空间深度值,所述预定映射函数的函数结果为颜色值。
  9. 根据权利要求1所述的方法,其特征在于,所述混合渲染图像包含对所述第二造影数据进行实时渲染得到的至少一部分渲染图,以及对所述第二组织数据进行实时渲染得到的至少一部分渲染图。
  10. 根据权利要求1-9中的任一项所述的方法,其特征在于,所述基于所述超声波的回波实时获取第一造影数据和第一组织数据,进一步包括:
    基于所述超声波的回波获取第一造影信号和第一组织信号;
    基于所述第一造影信号实时获取所述第一造影数据,并基于所述第一组织信号实时获取所述第一组织数据。
  11. 根据权利要求1-9中的任一项所述的方法,其特征在于,所述目标组织包括输卵管区域,所述方法还包括:
    对所述混合渲染图像进行特征提取,并基于所述特征提取的结果输出对所述输卵管区域的分析结果;
    显示所述分析结果。
  12. 一种超声成像装置,其特征在于,所述装置包括超声探头、发射/接收序列控制器、处理器和显示器,其中:
    所述发射/接收序列控制器用于控制所述超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    所述处理器用于对第二造影数据和第二组织数据进行实时渲染,以得到所述第二造影数据和所述第二组织数据的混合渲染图像,其中,所述第二造影数据包括所述第一造影数据的全部或部分数据,所述第二组织数据包括所述第一组织数据的全部或部分数据;
    所述显示器用于实时显示所述混合渲染图像。
  13. 根据权利要求12所述的装置,其特征在于,所述部分数据包括感兴趣区域对应的数据,所述处理器还用于:
    从所述第一造影数据中提取感兴趣区域对应的数据,以作为所述第二造影数据;和/或,从所述第一组织数据中提取感兴趣区域对应的数据,以作为所述第二组织数据。
  14. 根据权利要求12或13所述的装置,其特征在于,所述处理器对第二造影数据和第二组织数据进行实时渲染,以得到所述第二造影数据和所述第二组织数据的混合渲染图像,进一步包括:
    对所述第二造影数据和所述第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到所述混合渲染图像;或者
    对所述第二造影数据和所述第二组织数据同时进行实时渲染,以得到所述混合渲染图像。
  15. 根据权利要求14所述的装置,其特征在于,所述处理器对所述第二造影数据和所述第二组织数据各自进行实时渲染,并将各自渲染后得到的渲染结果进行融合,以得到所述混合渲染图像,进一步包括:
    对所述第二造影数据进行实时渲染得到第一立体渲染图,并获取所述第一立体渲染图中每个像素的颜色值和空间深度值;
    对所述第二组织数据进行实时渲染得到第二立体渲染图,并获取所述第二立体渲染图中每个像素的颜色值和空间深度值;
    基于所述第一立体渲染图中每个像素的空间深度值和所述第二立体渲染图中对应位置处像素的空间深度值确定所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重;
    基于所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重计算第三立体渲染图中每个像素的颜色值,并将所计算的颜色值映射到所述第三立体渲染图中,以得到所述混合渲染图像。
  16. 根据权利要求15所述的装置,其特征在于,所述处理器对所述第二造影数据和对所述第二组织数据进行实时渲染的渲染模式均为面绘制。
  17. 根据权利要求15所述的装置,其特征在于,所述处理器对所述 第二造影数据和/或对所述第二组织数据进行实时渲染的渲染模式为体绘制,所述处理器确定所述第一立体渲染图中每个像素与所述第二立体渲染图中对应位置处像素在颜色值融合时各自的权重还基于所述第一立体渲染图中每个像素的累积不透明度和/或所述第二立体渲染图中每个像素的累积不透明度。
  18. 根据权利要求14所述的装置,其特征在于,所述处理器对所述第二造影数据和所述第二组织数据同时进行实时渲染,以得到所述混合渲染图像,进一步包括:
    对所述第二造影数据和所述第二组织数据同时进行体绘制,获取所述体绘制的过程中每根光线路径上每个采样点的空间深度值和灰度值,其中每个采样点的灰度值包括所述第二造影数据在该点的灰度值和/或所述第二组织数据在该点的灰度值;
    基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,并基于每根光线路径上所有采样点的颜色值确定每根光线路径上的累积颜色值;
    基于所述每根光线路径上的累积颜色值确定第三立体渲染图中每个像素的颜色值,并将所述累积颜色值映射到所述第三立体渲染图中,以得到所述混合渲染图像。
  19. 根据权利要求18所述的装置,其特征在于,所述处理器基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,包括:
    根据预设三维颜色索引表,基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,所述三维颜色索引表中的三维变量分别为造影灰度值、组织灰度值和空间深度值,所述三维变量对应于一个颜色值;或者,
    根据预定映射函数,基于所述每根光线路径上每个采样点的空间深度值和灰度值获取所述每个采样点的颜色值,所述预定映射函数包括三个变量,分别为造影灰度值、组织灰度值和空间深度值,所述预定映射函数的函数结果为颜色值。
  20. 根据权利要求14所述的装置,其特征在于,所述混合渲染图像 包含对所述第二造影数据进行实时渲染得到的至少一部分渲染图,以及对所述第二组织数据进行实时渲染得到的至少一部分渲染图。
  21. 根据权利要求12-20中的任一项所述的装置,其特征在于,所述超声探头基于所述超声波的回波实时获取第一造影数据和第一组织数据,进一步包括:
    基于所述超声波的回波获取第一造影信号和第一组织信号;
    基于所述第一造影信号实时获取所述第一造影数据,并基于所述第一组织信号实时获取所述第一组织数据。
  22. 根据权利要求12-20中的任一项所述的装置,其特征在于,所述目标组织包括输卵管区域,
    所述处理器还用于对所述混合渲染图像进行特征提取,并基于所述特征提取的结果输出对所述输卵管区域的分析结果;
    所述显示器还用于显示所述分析结果。
  23. 一种超声造影成像方法,其特征在于,所述方法包括:
    控制超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    对所述第一造影数据和所述第一组织数据进行实时渲染,以得到所述第一造影数据和所述第一组织数据的混合渲染图像;
    实时显示所述混合渲染图像。
  24. 一种超声造影成像方法,其特征在于,所述方法包括:
    控制超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    对所述第一造影数据进行实时渲染得到第一立体渲染图,对所述第一组织数据进行实时渲染得到第二立体渲染图;
    同时并实时显示所述第一立体渲染图和所述第二立体渲染图。
  25. 一种超声成像装置,其特征在于,所述装置包括超声探头、发射/接收序列控制器、处理器和显示器,其中:
    所述发射/接收序列控制器用于控制所述超声探头向含有造影剂的目 标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    所述处理器用于对所述第一造影数据和所述第一组织数据进行实时渲染,以得到所述第一造影数据和所述第一组织数据的混合渲染图像;
    所述显示器用于实时显示所述混合渲染图像。
  26. 一种超声成像装置,其特征在于,所述装置包括超声探头、发射/接收序列控制器、处理器和显示器,其中:
    所述发射/接收序列控制器用于控制所述超声探头向含有造影剂的目标组织发射超声波,接收所述超声波的回波,并基于所述超声波的回波实时获取第一造影数据和第一组织数据,所述第一造影数据和第一组织数据均为容积数据;
    所述处理器用于对所述第一造影数据进行实时渲染得到第一立体渲染图,对所述第一组织数据进行实时渲染得到第二立体渲染图;
    所述显示器用于同时并实时显示所述第一立体渲染图和所述第二立体渲染图。
  27. 一种存储介质,其特征在于,所述存储介质上存储有计算机程序,所述计算机程序在运行时执行如权利要求1-11以及23-24中的任一项所述的超声造影成像方法。
PCT/CN2020/096627 2020-06-17 2020-06-17 超声造影成像方法、超声成像装置和存储介质 WO2021253293A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202080001014.9A CN111836584B (zh) 2020-06-17 2020-06-17 超声造影成像方法、超声成像装置和存储介质
PCT/CN2020/096627 WO2021253293A1 (zh) 2020-06-17 2020-06-17 超声造影成像方法、超声成像装置和存储介质
CN202410325546.8A CN118285839A (zh) 2020-06-17 2020-06-17 超声造影成像方法和超声成像装置
US18/081,300 US20230210501A1 (en) 2020-06-17 2022-12-14 Ultrasound contrast imaging method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096627 WO2021253293A1 (zh) 2020-06-17 2020-06-17 超声造影成像方法、超声成像装置和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/081,300 Continuation US20230210501A1 (en) 2020-06-17 2022-12-14 Ultrasound contrast imaging method and device and storage medium

Publications (1)

Publication Number Publication Date
WO2021253293A1 true WO2021253293A1 (zh) 2021-12-23

Family

ID=72918765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096627 WO2021253293A1 (zh) 2020-06-17 2020-06-17 超声造影成像方法、超声成像装置和存储介质

Country Status (3)

Country Link
US (1) US20230210501A1 (zh)
CN (2) CN118285839A (zh)
WO (1) WO2021253293A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767309A (zh) * 2020-12-30 2021-05-07 无锡祥生医疗科技股份有限公司 超声扫查方法、超声设备及系统
CN112837296A (zh) * 2021-02-05 2021-05-25 深圳瀚维智能医疗科技有限公司 基于超声视频的病灶检测方法、装置、设备及存储介质
CN116911164B (zh) * 2023-06-08 2024-03-29 西安电子科技大学 基于目标与背景分离散射数据的复合散射获取方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859434A (zh) * 2009-11-05 2010-10-13 哈尔滨工业大学(威海) 医学超声的基波和谐波图像融合方法
CN103077557A (zh) * 2013-02-07 2013-05-01 河北大学 一种自适应分层次胸部大数据显示的实现方法
CN110458836A (zh) * 2019-08-16 2019-11-15 深圳开立生物医疗科技股份有限公司 一种超声造影成像方法、装置和设备及可读存储介质
CN110893107A (zh) * 2018-09-12 2020-03-20 佳能医疗系统株式会社 超声波诊断装置、医用图像处理装置及非暂时性记录介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4421016B2 (ja) * 1999-07-01 2010-02-24 東芝医用システムエンジニアリング株式会社 医用画像処理装置
US7250949B2 (en) * 2003-12-23 2007-07-31 General Electric Company Method and system for visualizing three-dimensional data
WO2006095289A1 (en) * 2005-03-11 2006-09-14 Koninklijke Philips Electronics, N.V. System and method for volume rendering three-dimensional ultrasound perfusion images
JP5322522B2 (ja) * 2008-07-11 2013-10-23 株式会社東芝 超音波診断装置
JP5622374B2 (ja) * 2009-10-06 2014-11-12 株式会社東芝 超音波診断装置及び超音波画像生成プログラム
US9818220B2 (en) * 2011-12-28 2017-11-14 General Electric Company Method and system for indicating light direction for a volume-rendered image
KR102111626B1 (ko) * 2013-09-10 2020-05-15 삼성전자주식회사 영상 처리 장치 및 영상 처리 방법
US10002457B2 (en) * 2014-07-01 2018-06-19 Toshiba Medical Systems Corporation Image rendering apparatus and method
WO2018214063A1 (zh) * 2017-05-24 2018-11-29 深圳迈瑞生物医疗电子股份有限公司 超声设备及其三维超声图像显示方法
US11801031B2 (en) * 2018-05-22 2023-10-31 Canon Medical Systems Corporation Ultrasound diagnosis apparatus
CN111110277B (zh) * 2019-12-27 2022-05-27 深圳开立生物医疗科技股份有限公司 超声成像方法、超声设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859434A (zh) * 2009-11-05 2010-10-13 哈尔滨工业大学(威海) 医学超声的基波和谐波图像融合方法
CN103077557A (zh) * 2013-02-07 2013-05-01 河北大学 一种自适应分层次胸部大数据显示的实现方法
CN110893107A (zh) * 2018-09-12 2020-03-20 佳能医疗系统株式会社 超声波诊断装置、医用图像处理装置及非暂时性记录介质
CN110458836A (zh) * 2019-08-16 2019-11-15 深圳开立生物医疗科技股份有限公司 一种超声造影成像方法、装置和设备及可读存储介质

Also Published As

Publication number Publication date
CN111836584B (zh) 2024-04-09
CN118285839A (zh) 2024-07-05
CN111836584A (zh) 2020-10-27
US20230210501A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
WO2021253293A1 (zh) 超声造影成像方法、超声成像装置和存储介质
US11544893B2 (en) Systems and methods for data deletion
US11350905B2 (en) Waveform enhanced reflection and margin boundary characterization for ultrasound tomography
JP6877942B2 (ja) 医用画像処理装置及び医用画像処理プログラム
US20110125016A1 (en) Fetal rendering in medical diagnostic ultrasound
US20060262969A1 (en) Image processing method and computer readable medium
JP2012155723A (ja) 三次元医療映像から最適の二次元医療映像を自動的に生成する方法及び装置
JP2002078706A (ja) 3次元デジタル画像データの診断支援のための計算機支援診断方法及びプログラム格納装置
US7973787B2 (en) Method for picking on fused 3D volume rendered images and updating views
JP5194138B2 (ja) 画像診断支援装置およびその動作方法、並びに画像診断支援プログラム
KR20160094766A (ko) 의료 영상을 디스플레이 하기 위한 방법 및 장치
Chen et al. Real-time freehand 3D ultrasound imaging
US9759814B2 (en) Method and apparatus for generating three-dimensional (3D) image of target object
TW202033159A (zh) 圖像處理方法、裝置及系統、電子設備及電腦可讀儲存媒體
CN108876783B (zh) 图像融合方法及系统、医疗设备和图像融合终端
WO2024093911A1 (zh) 一种超声成像方法及超声设备
JP5579535B2 (ja) 胎児の肋骨数を測定する超音波システムおよび方法
CN111275617B (zh) 一种abus乳腺超声全景图的自动拼接方法、系统和存储介质
US9552663B2 (en) Method and system for volume rendering of medical images
Lawonn et al. Illustrative Multi-volume Rendering for PET/CT Scans.
JP2019098167A (ja) 情報処理装置、情報処理システム、情報処理方法およびプログラム
WO2022134049A1 (zh) 胎儿颅骨的超声成像方法和超声成像系统
CN113822837A (zh) 输卵管超声造影成像方法、超声成像装置和存储介质
US20230181165A1 (en) System and methods for image fusion
WO2012140396A1 (en) Biomedical visualisation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940507

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940507

Country of ref document: EP

Kind code of ref document: A1