CN110087553B - Ultrasonic device and three-dimensional ultrasonic image display method thereof

Publication number: CN110087553B
Authority: CN (China)
Prior art keywords: subset, volume data, subsets, fusion, rendering
Legal status: Active
Application number: CN201780079242.6A
Other languages: Chinese (zh)
Other versions: CN110087553A
Inventors: 梁天柱, 邹耀贤, 林穆清, 龚闻达, 朱磊
Assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Publication of CN110087553A (application) and of CN110087553B (application granted)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation


Abstract

A plurality of subsets of an ultrasonic three-dimensional volume data set are rendered distinctively using a plurality of sets of display configurations, and the rendering results of the subsets are multiplied by respective fusion coefficients and displayed in superposition. Because the fusion coefficients reduce the opacity of each subset's rendering result, rendered images of multiple regions or objects that may occlude one another from a given viewing angle can be displayed simultaneously. This simultaneous display helps the user observe the details of the regions or objects at the same time and thereby better grasp the overall appearance of the volume data, while also letting the user see intuitively how the regions or objects have been divided, so that the division can be adjusted subsequently.

Description

Ultrasonic device and three-dimensional ultrasonic image display method thereof
Technical Field
The present invention relates to an ultrasound apparatus, and more particularly, to a technique for displaying a three-dimensional ultrasound image on an ultrasound apparatus.
Background
During the use of medical ultrasonic instruments, it is often necessary to acquire three-dimensional volume data, form a volume data set, and render and display the volume data set so as to present a three-dimensional ultrasound image of the measured tissue. In some cases, the ultrasound apparatus further divides the volume data into regions, for example extracting an object or region of interest (such as a critical clinical structure, or part of the fetal face) from the volume data, or dividing the volume data into a plurality of subsets that correspond to different objects or regions of interest, and then rendering and displaying each object or region. Whether rendering the whole three-dimensional ultrasound image or rendering each object or region, today's medical ultrasound apparatuses usually apply the same rendering throughout. The problem with such a display is that when multiple objects overlap front to back at a certain viewing angle, usually only the object or region of interest selected by the physician is rendered, so the physician sees the details of only one object and cannot view the appearance of multiple objects simultaneously and intuitively.
Technical problem
The present application provides an ultrasonic device and a three-dimensional ultrasonic image display method for it, so that a doctor can intuitively and simultaneously see the rendered images of a plurality of objects on the display interface.
Solution to the problem
According to a first aspect, there is provided in an embodiment a method of displaying a three-dimensional ultrasound image, comprising:
acquiring ultrasonic three-dimensional volume data to obtain a volume data set;
identifying a plurality of subsets from the volume data set;
establishing a plurality of different sets of display configurations;
differentially rendering the plurality of subsets using a plurality of sets of display configurations;
acquiring a fusion coefficient of each subset;
and multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, then displaying them in superposition.
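As an illustration only, the following toy sketch (Python with numpy) walks the first-aspect steps end to end. The threshold-based subset "identification", the scalar brightness "display configurations" and the max-intensity "rendering" are illustrative stand-ins for the richer mechanisms described in the detailed description, not the claimed method itself.

```python
import numpy as np

volume = np.random.rand(32, 32, 32)            # stand-in ultrasound volume data set
masks = [volume < 0.5, volume >= 0.5]          # step: identify a plurality of subsets
configs = [0.3, 0.9]                           # step: two different display configurations
coeffs = [0.6, 0.4]                            # step: fusion coefficient of each subset

# Differential "rendering": project each masked subset along the view axis,
# scaled by its own configuration.
renders = [cfg * np.max(np.where(m, volume, 0.0), axis=0)
           for m, cfg in zip(masks, configs)]

# Multiply each rendering result by its fusion coefficient, then superimpose.
fused = np.clip(sum(a * r for a, r in zip(coeffs, renders)), 0.0, 1.0)
```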
According to a second aspect, an embodiment provides a three-dimensional ultrasound image display method, including:
establishing a plurality of different sets of display configurations;
differentially rendering a plurality of subsets of the ultrasound three-dimensional volume data set using a plurality of sets of display configurations;
acquiring a fusion coefficient of each subset;
and multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, then displaying them in superposition.
According to a third aspect, there is also provided in another embodiment a three-dimensional ultrasound image display method including:
acquiring ultrasonic three-dimensional volume data from a fetal examination to obtain a volume data set;
identifying a plurality of subsets from the volume data set based on image characteristics of the fetus;
rendering part or all of the plurality of subsets to obtain a plurality of sub-images;
fusing part or all of the plurality of sub-images to obtain a three-dimensional image; and
and displaying the three-dimensional image.
According to a fourth aspect, there is provided in an embodiment an ultrasound apparatus comprising:
the ultrasonic probe is used for transmitting ultrasonic waves to a region of interest in biological tissue and receiving echoes of the ultrasonic waves;
the transmitting/receiving sequence controller is used for generating a transmitting sequence and/or a receiving sequence, outputting the transmitting sequence and/or the receiving sequence to the ultrasonic probe, and controlling the ultrasonic probe to transmit ultrasonic waves to the region of interest and receive echoes of the ultrasonic waves;
the processor is used for generating ultrasonic three-dimensional volume data from the ultrasonic echo data to obtain a volume data set, identifying a plurality of subsets from the volume data set, establishing a plurality of different sets of display configurations, rendering the plurality of subsets distinctively using the plurality of sets of display configurations, acquiring a fusion coefficient of each subset, and multiplying the rendering results of the plurality of subsets by their respective fusion coefficients before displaying them in superposition;
the human-computer interaction device comprises a display used for displaying the ultrasonic rendering image.
According to a fifth aspect, there is provided in another embodiment an ultrasound apparatus comprising:
the ultrasonic probe is used for transmitting ultrasonic waves to a region of interest in biological tissue and receiving echoes of the ultrasonic waves;
the transmitting/receiving sequence controller is used for generating a transmitting sequence and/or a receiving sequence, outputting the transmitting sequence and/or the receiving sequence to the ultrasonic probe, and controlling the ultrasonic probe to transmit ultrasonic waves to the region of interest and receive echoes of the ultrasonic waves;
the processor is used for acquiring ultrasonic three-dimensional volume data from a fetal examination to obtain a volume data set, identifying a plurality of subsets from the volume data set according to image features of the fetus, rendering part or all of the plurality of subsets to obtain a plurality of sub-images, fusing part or all of the plurality of sub-images to obtain a three-dimensional image, and outputting the three-dimensional image to the display for display;
the human-computer interaction device comprises a display for displaying the ultrasonic three-dimensional image.
According to a sixth aspect, there is provided in an embodiment an ultrasound apparatus comprising:
a memory for storing a program;
a processor for implementing the above method by executing the program stored in the memory.
According to a seventh aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the above method.
According to an eighth aspect, there is provided in an embodiment a three-dimensional ultrasound image display system comprising:
an acquisition unit for acquiring ultrasound three-dimensional volume data and obtaining a volume data set;
an identification unit for identifying a plurality of subsets from the volume data set;
a rendering unit for establishing a plurality of sets of different display configurations and performing a distinctive rendering of the plurality of subsets using the plurality of sets of display configurations;
and the fusion unit is used for acquiring the fusion coefficient of each subset, multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, and displaying them in superposition.
Advantageous effects
According to the above embodiments, the plurality of subsets are rendered distinctively using a plurality of sets of display configurations, and the rendering results are fused for display. The fusion coefficients produce the same display effect as lowering the opacity rendering parameter of each subset, so each region or object appears semi-transparent and the rendered images of overlapping regions or objects can be displayed at the same time. This simultaneous display helps the user observe the details of every region or object at once and thus better grasp the overall appearance of the volume data, and it also lets the user see intuitively how the regions or objects have been divided, so that the division can be adjusted subsequently.
Brief description of the drawings
FIG. 1 is a schematic diagram of an ultrasound apparatus in one embodiment;
FIG. 2 is a flow chart of displaying a three-dimensional ultrasound image in one embodiment;
FIG. 3 is a diagram illustrating a rendering display of a plurality of objects according to the prior art;
FIG. 4 is a diagram illustrating rendering and displaying of a plurality of objects according to an embodiment of the present invention;
FIG. 5 is a flow diagram of a ray tracing method in one embodiment;
FIG. 6 is a schematic diagram of an embodiment of ray tracing;
FIG. 7 is a diagram illustrating an exemplary control terminal;
FIG. 8 is a diagram illustrating adjustment of a portion of a superimposed display of two objects in an embodiment;
FIG. 9 is a flow diagram of adjusting partitions in one embodiment;
FIG. 10 is a flow diagram of one-key removal of an obstruction in one embodiment;
FIG. 11 is a schematic structural diagram of a three-dimensional ultrasound image display system in an embodiment.
Detailed Description
The present invention is described in further detail below with reference to the detailed description and the accompanying drawings, wherein like elements in different embodiments share like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials, or methods, in different instances. In some instances, operations related to the present application are not shown or described in detail, in order to avoid obscuring the core of the application with excessive description; a detailed account of such operations is unnecessary for those skilled in the art, who can fully understand them from the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments, and the steps or actions in the method descriptions may be reordered in ways that will be apparent to those of ordinary skill in the art. Accordingly, the various sequences in the specification and drawings are only for describing particular embodiments and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The numbering of components as such, e.g. "first", "second", etc., is used herein only to distinguish the objects described and carries no sequential or technical meaning. The terms "connected" and "coupled", when used in this application and unless otherwise indicated, include both direct and indirect connections (couplings).
Embodiment one:
Referring to FIG. 1, an ultrasound apparatus 100 includes an ultrasound probe 101, a transmit/receive sequence controller 102, an echo processing module 104, a processor 105, a memory 103, and a human-computer interaction device 106. The processor 105 is connected to the transmit/receive sequence controller 102, the memory 103, and the human-computer interaction device 106; the ultrasound probe 101 is connected to the transmit/receive sequence controller 102 and to the echo processing module 104, whose output is connected to the processor 105.
The ultrasound probe 101 transmits ultrasound waves to a region of interest within the biological tissue 108 and receives their echoes. In this embodiment, the ultrasound probe 101 may be a volume probe or an area-array probe. A volume probe consists internally of a conventional one-dimensional element array and a built-in stepping motor that drives the array to swing. The stepping motor swings the scanning plane of the element array back and forth along the normal direction of the array, transmitting ultrasound to different scanning planes and receiving their echoes, thereby obtaining echo data for a plurality of scanning planes and scanning a three-dimensional space. An area-array probe has thousands of elements arranged in a matrix; it can directly transmit toward, and receive ultrasound echo data returned from, scanning planes in different directions of a three-dimensional space, enabling rapid three-dimensional scanning.
The transmit/receive sequence controller 102 generates a transmit sequence and/or a receive sequence, outputs it to the ultrasound probe, and controls the probe to transmit ultrasound waves to the region of interest and receive their echoes. The transmit sequence specifies which transducers in the ultrasound probe 101 are used for transmission and the parameters of the ultrasound transmitted into the biological tissue (e.g., amplitude, frequency, number of transmissions, transmission angle, waveform); the receive sequence specifies which transducers are used for reception and the parameters of the received echoes (e.g., reception angle, depth). Different probe types call for different transmit and receive sequences.
The echo processing module 104 is used for processing the ultrasound echoes, such as filtering, amplifying, and beam-forming the ultrasound echoes.
The memory 103 is used to store various data and programs, and for example, ultrasonic echo data may be stored in the memory.
The processor 105 executes the program in the memory 103, processes the received data, and/or controls the various parts of the ultrasound apparatus. In this embodiment, the processor 105 generates ultrasound image data from the ultrasound echo data; this may be image data for displaying a two-dimensional ultrasound image, or volume data for displaying a three-dimensional ultrasound image. The processor generates a frame of two-dimensional ultrasound image data from the echo data of each scanning plane, which can be output directly to the display. When a three-dimensional image of the measured tissue is needed, the ultrasound probe 101 transmits ultrasound to a plurality of scanning planes and receives their echo data; the processor 105 generates multiple frames of two-dimensional images from this echo data and reconstructs an image according to the spatial position relationship of the scanning planes, for example obtaining the voxel value of each point in a three-dimensional coordinate system through coordinate conversion and interpolation. For an ultrasound image, a voxel value reflects how strongly that position reflects ultrasound; the voxel values of all points, together with their spatial position relationships, form the three-dimensional volume data set of the measured tissue. In a preferred embodiment, the three-dimensional volume data may additionally undergo processing such as smoothing and denoising. The processor also identifies a plurality of subsets from the volume data set according to the different objects. The processor further renders the plurality of subsets to obtain a plurality of sub-images: the subsets may be rendered identically with one set of display configurations, yielding sub-images with the same display configuration, or rendered distinctively with several different sets of display configurations, yielding sub-images with different display configurations. The rendered sub-images of the subsets are then fused for display: for example, according to the acquired fusion coefficient of each subset, the sub-images corresponding to the rendering results of the subsets are multiplied by their respective fusion coefficients and superimposed, and the superimposed result is output to the display as the fused three-dimensional ultrasound image.
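The reconstruction just described could be pictured with the following hedged sketch, which assumes a volume-probe sweep and places each 2D frame into a Cartesian voxel grid by nearest-neighbour coordinate conversion; a real system would interpolate between frames and smooth, as noted above, and the geometry here is deliberately simplified.

```python
import numpy as np

def reconstruct(frames, angles, shape=(64, 64, 64)):
    """frames: list of (depth, width) arrays; angles: sweep angle (rad) per frame."""
    vol = np.zeros(shape)
    z0 = shape[0] // 2                           # sweep axis through the grid centre
    for frame, ang in zip(frames, angles):
        depth, width = frame.shape
        for r in range(depth):                   # r = distance along the scan line
            z = z0 + int(round(r * np.sin(ang)))  # rotate the scan line about the axis
            y = int(round(r * np.cos(ang)))
            if 0 <= z < shape[0] and 0 <= y < shape[1]:
                w = min(width, shape[2])
                vol[z, y, :w] = frame[r, :w]     # nearest-neighbour placement
    return vol

frames = [np.random.rand(64, 64) for _ in range(9)]   # stand-in 2D echo frames
angles = np.linspace(-0.3, 0.3, 9)                    # a narrow fan of scanning planes
volume = reconstruct(frames, angles)
```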
The human-computer interaction device 106 includes a control terminal 1061 and a display 1062. A user inputs instructions through the control terminal 1061 or interacts with the images output on the display 1062; the control terminal 1061 may provide a keyboard, operation keys, a gain key, a scroll wheel, a touch screen, and the like. The display 1062 displays the visual data output by the processor, presenting it to the user as images, graphics, video, text, numbers, and/or characters.
Based on the above ultrasound device, in an embodiment, a scheme for displaying a three-dimensional ultrasound image is shown in FIG. 2 and includes the following steps:
Step 10, acquiring ultrasonic three-dimensional volume data to obtain a volume data set. In general, the voxel values of the measured tissue are obtained from multiple frames of two-dimensional ultrasound image data with known relative positions, yielding a three-dimensional ultrasound volume data set.
Step 11, identifying a plurality of subsets from the volume data set. In dividing the subsets, any of the following methods may be employed:
the volume data set is identified into a plurality of subsets according to different objects, each subset corresponds to a detection object, the object can be an organ tissue in the body, or a certain specific structure of the tissue, for example, the object can be a tissue of a heart, a liver, a uterus, or the like, or a specific structure of a fetal face in the uterus, a fetal limb, a placenta, an umbilical cord, or the like.
An object, such as a fetal face, is first identified from the volume data set, resulting in a subset of the object, with the other portions being a subset and not being distinguished.
A plurality of objects are identified from the volume data set, a subset of the plurality of objects is obtained, and the other parts are used as a subset and are not distinguished.
Any known image recognition technique may be used to identify the subsets from the volume data set according to the objects, for example:
Analyzing the ultrasonic three-dimensional volume data with a mathematical model trained on samples of the same type as the measured object, determining the three-dimensional volume data belonging to the measured object, and combining that data into the subset of the measured object.
Detecting the positions of one or more features of the measured object in the ultrasonic three-dimensional volume data using image processing and/or image segmentation algorithms, and determining from those positions the three-dimensional volume data belonging to the measured object, that data being combined into the subset of the measured object.
Outputting the ultrasonic three-dimensional volume data to the display as a three-dimensional image, and determining the three-dimensional volume data belonging to the measured object from the positions of one or more object features designated by the user on the three-dimensional image, that data being combined into the subset of the measured object.
When a fetus is examined with ultrasound, three-dimensional volume data of the fetus is obtained, and a plurality of subsets can be identified from the volume data set according to image features of the fetus. These image features may be facial features of the fetus, limb structure features, umbilical cord features, and so on; a subset identified from the facial features of the fetus is the fetal face subset, used for generating the fetal face image. The fetal facial features include image features in the ultrasonic three-dimensional volume data that correspond to the anatomy of one or more tissue structures of the fetal face, the tissue structure(s) being selected from the fetal eyes, nose, forehead, chin, cheeks, ears, facial contour, mouth, and the like.
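For the later steps it is convenient to picture the identification result as a label volume, one integer subset index per voxel. The sketch below is an illustrative stand-in: the thresholds replace the model- or feature-based recognition the text actually describes.

```python
import numpy as np

volume = np.random.rand(64, 64, 64)            # stand-in volume data set
labels = np.zeros(volume.shape, dtype=int)     # 0 = undistinguished remainder subset
labels[volume > 0.6] = 1                       # 1 = e.g. the fetal face subset
labels[(volume > 0.4) & (volume <= 0.6)] = 2   # 2 = e.g. the placental subset
masks = [labels == k for k in range(3)]        # one boolean mask per subset
```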
Step 12, rendering all or part of the plurality of subsets. This embodiment takes the distinctive rendering of the plurality of subsets as its example. A plurality of different sets of display configurations are pre-established; the contents of a display configuration include, but are not limited to:
1. selection of rendering mode, such as surface mode, HDlive mode, minimum mode, maximum mode, X-Ray mode, inversion mode, etc.;
2. selection of rendering parameters, such as pseudo color, hue, brightness, threshold, opacity, contrast, depth rendering parameters, and post-processing parameters of VR graphs, etc.;
3. selection of light sources, such as light source mode (point light source, parallel light source, etc.), light source position, etc.;
4. selection of volume data pre-/post-processing, such as gain level, smoothing level, denoising level, and the like.
Each set of display configurations differs from the others in at least one item, giving a plurality of different sets. The subsets are rendered with these sets, each set producing its own rendering result. During rendering, each subset may use a set of display configurations different from every other subset, producing a rendering result distinct from the others; alternatively, the subsets may be divided into at least two groups, each group using a set different from the other groups, so that each group's rendering result differs from the others.
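A set of display configurations can be pictured as a per-subset record, as in the sketch below; the field names and values are illustrative assumptions rather than the device's actual parameter list, and each record differs from the other in at least one item, which is what makes the rendering distinctive.

```python
# Hypothetical per-subset display configurations (illustrative fields only).
display_configs = {
    "fetal_face": {"mode": "surface", "pseudo_color": "skin", "opacity": 1.0,
                   "light": {"type": "point", "position": (0.0, 1.0, 2.0)},
                   "smoothing_level": 2},
    "placenta":   {"mode": "x_ray", "pseudo_color": "blue", "opacity": 1.0,
                   "light": {"type": "parallel", "direction": (0.0, 0.0, -1.0)},
                   "smoothing_level": 1},
}
```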
All or part of the plurality of subsets are rendered to obtain a plurality of sub-images, and all or part of the sub-images are fused into one overall three-dimensional image in which each sub-image has a semi-transparent display effect. Semi-transparent here means that a sub-image displays its own volume data without blocking the display of the volume data of the other sub-images, so multiple sub-images can be displayed at the same time.
The fusion display of all or part of the sub-images can be performed in various ways; in a specific embodiment it uses fusion coefficients, as in steps 13-14.
Step 13, acquiring the fusion coefficient of each subset. In a specific embodiment, the fusion coefficient may be preset by the system or set by the user: for example, a coefficient is set for each object, the object in front being given 1 or 0.6 and the object behind 0 or 0.4, and the user may set the relative front-back positions of the objects according to need or clinical meaning. In some embodiments an adaptive fusion coefficient may be used instead; it may differ with the position of the object and change with the object's thickness or density, and the adaptive coefficient of each subset may be calculated according to various fusion rules. A fusion coefficient is a value between 0 and 1; the sum of the subsets' coefficients may equal 1, but may also be greater or less than 1.
Step 14, multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and displaying them in superposition. Multiplying each subset's rendering result from step 12 by its fusion coefficient and adding the products together is equivalent to attenuating each rendering result by its coefficient; a coefficient of 0.4, for example, attenuates that subset's rendering effect by 60%. Typically the opacity parameter among each subset's rendering parameters is 1, i.e. visually opaque; attenuating the rendering result makes the display lighter and thus visually more transparent.
When several objects overlap front to back at a certain viewing angle, that is, when all or part of a front object blocks all or part of an object behind it as seen from that angle, then regardless of whether the objects are rendered with a single display configuration or with different ones, if the opacity of the rendering result is not reduced, either the front object blocks the rear object so that the rear object is not displayed, or only the object of interest selected by the doctor is rendered and the parts of the other objects overlapping it are not (the doctor may select one or more objects as objects of interest as needed). As shown in FIG. 3, the placental portion 313a lies in front of the fetal face 313b; both are rendered with the same display configuration and a rendering-parameter opacity of 1, so in the rendering result a partial area of the fetal face 313b is blocked by the placental portion 313a, and the doctor cannot see a complete rendered image of the fetal face 313b.
With the rendering fusion scheme of this embodiment, the rendering results are multiplied by their fusion coefficients and superimposed in proportion, so different areas are displayed on the screen simultaneously, each in its own proportion, producing a partially transparent (also called semi-transparent) effect. Every object in the overlapped part can be in a semi-transparent state: even an object that lies spatially behind another is not completely covered by the object in front, so a user (such as a doctor) can intuitively see the appearance of several objects at the same time. As shown in FIG. 4, the placental portion 313a and the fetal face 313b both have a semi-transparent rendering effect; although the placental portion 313a lies in front of the fetal face 313b, it does not completely cover the image of the face behind it, and the doctor can intuitively see the details of both 313a and 313b in the overlapping part at the same time.
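A minimal sketch of the weighted superposition of step 14, assuming each subset has already been rendered to an RGB image of common size (the coefficient values are the illustrative 0.6/0.4 split mentioned above):

```python
import numpy as np

def fuse(renders, coeffs):
    """renders: list of (H, W, 3) float arrays in [0, 1]; coeffs: floats in [0, 1]."""
    fused = np.zeros_like(renders[0])
    for img, a in zip(renders, coeffs):
        fused += a * img                 # attenuate each render, then superimpose
    return np.clip(fused, 0.0, 1.0)      # the coefficients need not sum to 1

front = np.random.rand(128, 128, 3)      # e.g. the placental portion's render
back = np.random.rand(128, 128, 3)       # e.g. the fetal face's render
semi_transparent = fuse([front, back], [0.6, 0.4])
```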
When the adaptive fusion coefficients of the subsets are used, different parts are displayed with visibly different transparency, letting the doctor intuitively sense properties of an object such as its thickness and density. In one example of this embodiment, the adaptive fusion coefficients of the subsets are calculated by ray tracing, which follows rays in the reverse direction from the viewpoint, equivalent to casting rays from the eye, and computes the reflection, refraction, and absorption of each ray where it intersects objects or media in the scene. The flow for calculating the adaptive fusion coefficients by ray tracing in this embodiment is shown in FIG. 5 and includes the following steps:
Step 131, calculating the voxel values of each subset on each tracing ray by the ray tracing method. As shown in FIG. 6, a number of tracing rays (simulated rays) are emitted from the viewpoint 210 through the three-dimensional image of each object to be displayed; each ray enters the image along the viewing angle, some rays passing through several objects and some through only one. The description below takes the tracing ray 220, which passes through the first object 230 and the second object 240 in FIG. 6, as an example. Clinically, if the second object 240 is the fetal face, the first object 230 may be a placental portion occluding it. First, the voxels that the tracing ray 220 passes through in the first object 230 and the second object 240 are identified, and their voxel values are obtained from the multi-frame ultrasound echo data; a voxel value reflects the reflection intensity of the ultrasound at that position. A liquid (such as blood) reflects ultrasound weakly, giving a weak echo signal and a small voxel value; a dense solid (such as bone or an intervening object) reflects strongly at its boundary with softer tissue, giving a strong echo signal and a large voxel value.
Then, the subsets to which the passed voxels belong are determined. The integral of the voxel values of the passed voxels belonging to the subset of the first object 230 is calculated, giving the voxel value of that subset on the tracing ray 220, and likewise the integral over the passed voxels belonging to the subset of the second object 240 gives the voxel value of that subset on the ray.
Step 132, obtaining the spatial distribution of each subset on the tracing ray. The extent over which each subset is distributed along the ray, i.e. the thickness distribution of each subset on the ray, is obtained from the subsets to which the passed voxels belong.
Step 133, identifying the spatial position of each subset along the direction of ray incidence. As shown in FIG. 6, the tracing ray 220 issued from the viewpoint 210 first passes through the first object 230 and then through the second object 240, so relative to the subset of the second object 240, the subset of the first object 230 is the spatially front subset and that of the second object 240 the spatially rear subset.
Step 134, determining the fusion coefficient of each subset on each tracing ray, the coefficient being a value between 0 and 1. It may be determined from at least one of the voxel value, spatial distribution, and spatial position of each subset on the ray, according to at least one of the following rules:
In the incident direction of the tracing ray, a subset positioned spatially in front has a larger fusion coefficient than a subset positioned behind, meaning the front object appears more opaque than the rear one, so the user can judge the objects' relative spatial positions from their opacity.
On the tracing ray, a subset with larger voxel values has a larger fusion coefficient than one with smaller voxel values, meaning denser solids (e.g. bone) appear more opaque than muscle and liquid, so the user can judge what tissue an object is from its opacity.
On the tracing ray, a subset with a larger voxel distribution range has a larger fusion coefficient than one with a smaller range, meaning thicker parts appear more opaque, so the user can judge the local thickness of an object from its opacity.
Following rules predetermined by the system or set by the user, this step determines the fusion coefficient of each subset on the tracing ray 220; because the coefficient varies with the subset's thickness, density, and/or spatial position, it is an adaptive fusion coefficient. In the same way, the fusion coefficient of each subset on the tracing rays of every viewing angle can be obtained, and in step 14 the rendering result of each subset on a tracing ray is multiplied by that subset's fusion coefficient on the ray for superimposed display.
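The per-ray computation of steps 131-134 might be sketched as follows. The three cues (voxel-value integral, thickness, order of first entry) come from the steps above, but the way they are combined into one coefficient is an illustrative rule, since the text leaves the exact rule to the system or the user.

```python
import numpy as np

def ray_fusion_coeffs(values, labels, n_subsets):
    """values: voxel values sampled along one ray, front to back;
    labels: subset index of each sample."""
    integ = np.zeros(n_subsets)              # step 131: voxel-value integral per subset
    thick = np.zeros(n_subsets)              # step 132: sample count (thickness) per subset
    first = np.full(n_subsets, len(values))  # step 133: index of first entry = spatial order
    for i, (v, s) in enumerate(zip(values, labels)):
        integ[s] += v
        thick[s] += 1
        first[s] = min(first[s], i)
    # Step 134 (illustrative rule): nearer, denser, thicker subsets score higher.
    score = (integ / (integ.sum() + 1e-9)
             + thick / (thick.sum() + 1e-9)
             + (len(values) - first) / len(values))
    return np.clip(score / (score.max() + 1e-9), 0.0, 1.0)  # coefficients in (0, 1]

values = [0.2, 0.8, 0.9, 0.3, 0.4]   # samples along the ray, front to back
labels = [0, 0, 0, 1, 1]             # first object 230, then second object 240
coeffs = ray_fusion_coeffs(values, labels, n_subsets=2)
```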
In the above steps, those skilled in the art will understand that step 13 may also be exchanged in timing with step 12, i.e. the fusion coefficients of the subsets are acquired first and the subsets are then rendered distinctively. Step 14 may be performed after each subset has been rendered, or during the rendering of the subsets, i.e. fusing while rendering.
In some cases, when the subsets are identified from the volume data set according to different objects in step 11, part of the volume data is identified into both a first subset and a second subset; volume data identified into at least two subsets is referred to here as common volume data. In that case the common volume data is rendered with any one of the display configurations used by the subsets it belongs to, or with a display configuration different from that of each related subset or group; its fusion coefficient is likewise calculated, and finally its rendering result is multiplied by the fusion coefficient and displayed superimposed with the other objects.
Embodiment two:
On the basis of embodiment one, the processor is further configured to move volume data at the boundary of adjacent subsets from the subset it originally belonged to into the adjacent subset, and to render the adjusted volume data with the display configuration of the adjacent subset, thereby adjusting the displayed object or area. Ways of adjusting include, but are not limited to:
1. selecting different partition strategies/algorithms to re-partition the region or object, such as re-partitioning the region or object using different models according to different segmentation objectives or clinical meanings;
2. using a cropping operation or applying additional algorithms to divide an existing demarcated area or object into a plurality of smaller regions of interest; or removing a part of the existing divided region or object and reducing the range of the region of interest;
3. merging the existing divided areas; or expanding the range of the region of interest to add part of the region of non-interest into the region of interest;
4. integrally adjusting the division surfaces between the divided regions, such as integrally translating and rotating the division surfaces, adjusting the positions of control points on the division surfaces or parameters of a division surface equation to change the shapes and the positions of the division surfaces, or replacing a mathematical model of the division surfaces, and the like;
5. locally adjusting the division surfaces between the divided areas or the objects, such as using a tool like a painting brush or an eraser or moving part of control points on the division surfaces to locally increase/decrease the range of a certain divided area; at the same time, other divided regions or objects may also have their range reduced/increased accordingly.
In some cases, when the subsets are identified from the volume data set according to different objects in step 11, part of the volume data may be misidentified, and the doctor may reassign it to the correct subset according to clinical experience; this calls for a fine adjustment tool such as a brush or an eraser. The user enters adjustment operations through the control terminal so as to adjust the image output on the display. FIG. 7 shows a schematic diagram of a control terminal: the human-computer interaction device 300 includes a display 310, a control panel 320, and a touch screen 330, the control panel 320 and the touch screen 330 forming the control terminal; in some embodiments the touch screen may be omitted.
As shown in fig. 7, the display 310 includes a display area 311 on which various images, such as a two-dimensional ultrasound image 312 and a three-dimensional ultrasound image 313, can be displayed.
The touch screen 330 includes a plurality of operable icons 331 thereon corresponding to a plurality of different functions.
The control panel 320 may provide various operation keys, such as a press-operated keyboard 321, knobs 322 and 323, a gain key 327, and a scroll wheel (or trackball) 328. For a three-dimensional ultrasound image in which several rendered objects are displayed, each rendered object may correspond to one operation key. As shown in FIG. 8, the three-dimensional ultrasound image 313 displays several rendered objects, for example the placental portion 313a and the fetal face 313b; the placental portion 313a corresponds to knob 322 and the fetal face 313b to knob 323, so operating knob 322 is treated as selecting and operating on the placental portion 313a, and operating knob 323 as selecting and operating on the fetal face 313b. Knob 324 may be a multi-position switch for selecting the operation type, including cutting, merging, division-surface adjustment, brush, eraser, and so on: when knob 324 selects the brush, a painting operation can be performed on the image in the display area 311 by operating the wheel 328, and when it selects the eraser, a wiping operation can be performed in the same way. The flow of the wiping operation is described below as an example; as shown in FIG. 9, it includes the following steps:
Step 20, detecting the adjustment operation selected by the user. The user selects the wiping operation via knob 324 on the control panel.
Step 21, detecting the subset selected by the user and the area selected on the superimposed display image. The user moves the cursor in the display area with the wheel 328, for example into an area where the placental portion 313a and the fetal face 313b overlap; when the user is detected rotating knob 322, the selected data is taken to be the volume data belonging to the subset of the placental portion 313a, and when rotating knob 323, the volume data belonging to the subset of the fetal face 313b.
Step 22, determining the adjusted volume data from the subset and area selected by the user. The selected area can be indicated by a circle 313c, which represents a sphere whose radius equals that of the circle; the three-dimensional volume data inside the sphere is the adjusted volume data. By rotating knob 322 or 323 the user can change the radius of the circle 313c, and hence of the sphere, thereby changing the range of the adjusted volume data.
Step 23, in response to the wiping-like operation input by the user, moving the adjusted volume data from the subset it originally belonged to into the adjacent subset. Once the radius of the circle 313c is set, the user operates the wheel 328; the icon on the display may change to an eraser graphic, and as the user wipes back and forth with the wheel 328, the adjusted volume data is moved from its original subset into the adjacent subset. To continue adjusting, the user keeps operating the wheel 328, and the system again determines the adjusted volume data from the circle 313c and moves it accordingly. Taking the fetal face as the reference object: adjusting volume data from the subset of the fetal face 313b to the subset of the placental portion 313a with the eraser is called "erasing", and adjusting it from the subset of the placental portion 313a to the subset of the fetal face 313b is called "anti-erasing".
Step 24, rendering the adjusted volume data with the display configuration of the adjacent subset, so that it has the same rendering effect as the subset to which it now belongs.
In this embodiment, the different objects are rendered distinctively and displayed fused before the volume data is adjusted, so the user can inspect the current state of the data to be adjusted; this helps the user decide whether the identified data really needs adjusting and whether the adjustment operation is correct. For example, if the doctor judges from clinical experience that the nose in the fetal face image is defective, the defect may be a misidentification by the ultrasound device when dividing the subsets, or the fetus may genuinely have a nasal defect. If the rendered display of other objects overlapping the nose, such as the placenta, can be seen at the same time, the doctor can judge empirically whether volume data has been misidentified: for example, volume data that should belong to the fetal face subset may have been assigned to the placenta subset, in which case the doctor can move that data from the placenta subset to the fetal face subset so that the nose image becomes complete. Conversely, if no volume data has been misidentified, e.g. nothing extraneous appears in the placenta image, the doctor can conclude that the fetus indeed has a nasal defect. This embodiment therefore improves the accuracy of later diagnosis.
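The volume-data reassignment of steps 20-24 could be sketched as follows, assuming subset membership is stored as a label volume; the sphere geometry mirrors the circle 313c described above, and the names are illustrative.

```python
import numpy as np

def erase(labels, center, radius, src, dst):
    """labels: (Z, Y, X) int array of subset indices; center: (z, y, x); radius in voxels.
    Moves voxels of subset `src` inside the sphere into subset `dst`; the moved
    voxels are then rendered with the adjacent subset's display configuration."""
    zz, yy, xx = np.indices(labels.shape)
    inside = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
              + (xx - center[2]) ** 2) <= radius ** 2
    labels = labels.copy()
    labels[inside & (labels == src)] = dst
    return labels

labels = np.zeros((64, 64, 64), dtype=int)     # 0 = placental subset, 1 = face subset
labels = erase(labels, center=(32, 32, 32), radius=5, src=0, dst=1)  # "anti-erase"
# "Erase" is the same call with src and dst swapped.
```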
In another embodiment, the effects of "erasing" and "anti-erasing" can be achieved by a different scheme. For example, with the fetal face as the reference object corresponding to knob 323 (so that operating knob 323 is treated as selecting and operating on the fetal face 313b), the "erase" operation on the fetal face includes the following steps:
1.1 Receiving a second instruction input by the user on the three-dimensional image. For example, the user turns knob 324 to the "erase" position; this operation is taken as the second instruction.
1.2 According to the second instruction, identifying the first position on the three-dimensional image corresponding to the user's input and the subset in which that position lies. The user moves the cursor to the position to be erased; when knob 324 is at the erase position, the cursor icon may change correspondingly, for example to a circular icon, and the user can change the size of the circle by rotating knob 323, thereby setting the size of the area covered by the selected first position. Since the user is operating knob 323, the user is taken to be performing an "erase" operation on the fetal face, and the subset in which the first position lies is the fetal face subset.
1.3 Determining, from the first position, the volume data of that subset included in the first position. With the circular-icon cursor, the volume data included in the first position is the fetal face volume data lying within a sphere whose radius is that of the circular icon.
1.4 Reducing the fusion coefficient of the sub-image corresponding to the subset in which the first position lies. In this embodiment the fusion coefficient of the fetal face sub-image is reduced, making the face image look more transparent and producing the effect of the fetal face image being "erased".
In another embodiment, the fusion coefficient of the sub-image corresponding only to the volume data included in the first position may be reduced instead; that is, the coefficient of the image of the area covered by the first position within the fetal face image is reduced, so that area appears more transparent and only that part of the face image is "erased".
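This coefficient-based "erase", and its inverse, the "anti-erase" described next, can be sketched as a simple scaling of per-subset fusion coefficients; the step size is an illustrative assumption.

```python
def adjust_fusion(coeffs, subset, action, step=0.2):
    """coeffs: dict subset name -> fusion coefficient in [0, 1].
    'erase' lowers the coefficient (more transparent); 'anti-erase' raises it."""
    a = coeffs[subset]
    a = a - step if action == "erase" else a + step
    coeffs[subset] = min(max(a, 0.0), 1.0)
    return coeffs

coeffs = {"fetal_face": 0.8, "placenta": 0.5}
coeffs = adjust_fusion(coeffs, "fetal_face", "erase")       # face looks more transparent
coeffs = adjust_fusion(coeffs, "fetal_face", "anti-erase")  # and opaque again
```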
When the "anti-erase" operation is performed on the fetal face, the flow is as follows:
2.1 Receiving a third instruction input by the user on the three-dimensional image. For example, the user turns knob 324 to the "anti-erase" position; this operation is taken as the third instruction.
2.2 According to the third instruction, identifying the second position on the three-dimensional image corresponding to the user's input and the subset in which that position lies. Likewise, the user moves the cursor to the position to be "anti-erased" and can change the size of the circular icon by rotating knob 323, thereby setting the size of the area covered by the selected second position. Since the user is operating knob 323, the user is taken to be performing an "anti-erase" operation on the fetal face, and the subset in which the second position lies is the fetal face subset.
2.3 Determining, from the second position, the volume data of that subset included in the second position. With the circular-icon cursor, the volume data included in the second position is the fetal face volume data lying within a sphere whose radius is that of the circular icon.
2.4 Increasing the fusion coefficient of the sub-image corresponding to the subset in which the second position lies, or of the sub-image corresponding to the volume data included in the second position. That is, the fusion coefficient of the fetal face sub-image is raised so that the face image looks more opaque, or the coefficient of the image of the area covered by the second position is raised so that that area looks more opaque, producing the "anti-erase" effect on the fetal face image.
Embodiment three:
when the three-dimensional image is rotated to a certain angle or the three-dimensional image is observed from a certain view angle, the face of the fetus is sometimes shielded by other structures, and at this time, it is desirable to remove the shielding object shielding the face of the fetus by performing one operation. In this embodiment, as shown in fig. 7, a control key (for example, a button 329) is provided on the control panel, and a command for removing the blocking object by one key can be input by a user by pressing the button 329 when the face of the fetus is completely or partially blocked, corresponding to the function of removing the blocking object by one key. In one embodiment, the process flow of removing the obstruction by one key is shown in FIG. 10, and includes the following steps:
and step 30, acquiring ultrasonic three-dimensional volume data to obtain a volume data set.
And step 31, determining the depth of each voxel on the fetal face contour in the volume data set according to the fetal face features to form a depth change curved surface of the fetal face contour.
Step 32, the volume data set is segmented into at least two subsets based on the depth-varying surface, wherein one subset comprises three-dimensional volume data of the face of the fetus.
Step 33, rendering all or part of the plurality of subsets to obtain a plurality of sub-images.
And step 34, performing fusion display on all or part of the plurality of sub-images.
Step 35, receiving a first instruction generated by the user through a single operation. Button 329 on the control panel corresponds to the one-key obstruction-removal function; when the fetal face is completely or partially blocked, the user can input the first instruction by pressing this button.
Step 36, according to the first instruction, reducing the fusion coefficients of the sub-images corresponding to the subsets other than the one containing the fetal face, so that those sub-images display more transparently and the fetal face image stands out.
This embodiment removes the objects blocking the fetal face through a single operation, simplifying the user's (e.g. the doctor's) workflow to the greatest extent.
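The one-key behavior of steps 35-36 could look like the following sketch, assuming per-subset fusion coefficients are kept in a dictionary; the attenuation factor is an illustrative assumption.

```python
def remove_obstructions(coeffs, keep="fetal_face", attenuation=0.1):
    """Lower the fusion coefficient of every subset except the one to keep,
    so occluding structures fade out in a single operation."""
    return {name: (a if name == keep else a * attenuation)
            for name, a in coeffs.items()}

coeffs = {"fetal_face": 0.9, "placenta": 0.6, "umbilical_cord": 0.5}
coeffs = remove_obstructions(coeffs)   # placenta and cord become nearly transparent
```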
In this embodiment, the depth-varying curved surface of steps 31-32 is used to separate the fetal face subset from the subsets of other structures. Those skilled in the art will understand that in other embodiments the fetal face subset and the other subsets may be identified by other recognition methods, after which step 36 can likewise reduce the fusion coefficients of the sub-images corresponding to the subsets other than the fetal face subset.
Embodiment four:
This embodiment further provides a three-dimensional ultrasound image display system. As shown in FIG. 11, the system includes an acquisition unit 410, an identification unit 420, a rendering unit 430, and a fusion unit 440.
The acquisition unit 410 acquires ultrasound three-dimensional volume data and obtains the volume data set. In a specific embodiment, the acquisition unit 410 collects two-dimensional images over a series of scanning planes and integrates them according to their three-dimensional spatial relationship, realizing volume data acquisition over a three-dimensional space. The selection and control of the scanning planes can be achieved with a volume probe or an area-array probe: a volume probe consists internally of a conventional one-dimensional element array and a built-in stepping motor that swings the array's scanning plane back and forth along the array's normal direction to scan a three-dimensional space, while an area-array probe has thousands of elements arranged in a matrix and can transmit to and receive from different directions of the space directly, enabling rapid volume acquisition. The two-dimensional images collected in the scanning planes are reconstructed according to their spatial relationship: coordinates are converted according to each plane's spatial position, and interpolation yields the voxel value of each point in the three-dimensional volume data. The three-dimensional volume data output by the acquisition unit 410 may be further smoothed, denoised, and so on.
The identification unit 420 divides the input three-dimensional volume data into a plurality of regions. In a specific embodiment, the identification unit 420 identifies a plurality of subsets from the volume data set according to different objects, at least one of the subsets containing an object of interest. The region-division results output by the identification unit 420 may overlap in spatial position; in particular, some regions may be completely contained within others (e.g., the heart region is contained within the torso region). The identification unit 420 can partition regions by geometric shape as well as by object or tissue structure, including but not limited to the following (a minimal cutting sketch follows the list):
1. cutting the volume data, such as cuts based on geometric shapes like planes, cuboids, spherical surfaces and ellipsoidal surfaces;
2. identifying key sites or structures in the volume data, such as identification of the adult endocardium or ventricles/atria, fetal-face identification, or identification of the endometrium or uterine adnexa;
3. segmenting the volume data, such as segmenting fetal volume data into a fetal region and an amniotic-fluid region.
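By way of example, a minimal sketch of the first mode, cutting the volume data with a plane, is given below; the helper name and the use of boolean voxel masks to represent subsets are assumptions made for illustration.

```python
import numpy as np

def split_by_plane(shape, point, normal):
    """Divide a volume-data set into two subsets with a cutting plane
    defined by a point and a normal vector. The subsets are returned as
    boolean voxel masks so each can later be rendered with its own
    display configuration."""
    zi, yi, xi = np.indices(shape)
    coords = np.stack([zi, yi, xi], axis=-1).astype(float)
    # Signed distance of every voxel from the plane.
    side = (coords - np.asarray(point, dtype=float)) @ np.asarray(normal, dtype=float)
    return side >= 0, side < 0

front, back = split_by_plane((64, 64, 64), point=(32, 32, 32), normal=(0, 0, 1))
```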
The rendering unit 430 renders part or all of the plurality of subsets to obtain a plurality of sub-images. In a specific embodiment, the rendering unit 430 establishes a plurality of different sets of display configurations and renders the plurality of subsets distinctively using those configurations.
The fusion unit 440 fuses part or all of the plurality of sub-images rendered by the rendering unit 430 to obtain a three-dimensional image displayed in a superimposed manner. In a specific embodiment, the fusion unit 440 obtains the fusion coefficient of each subset and superimposes the rendering results of the plurality of subsets after multiplying each by its fusion coefficient, yielding the final display image. Ways of fusing include, but are not limited to:
1. the rendering results of the regions are directly superimposed according to a certain fusion proportion; the user can use a preset combination of fusion proportions or specify the proportion of each region;
2. the rendering results of the regions are superimposed according to an adaptive fusion proportion, which can be calculated from the spatial position relationship, voxel-value distribution, preset weights, preset fusion rules and the like of each region; the user can also change the way the adaptive proportion is calculated (note that, with an adaptive fusion proportion, different voxels within the same divided region may have different proportionality coefficients);
3. as designated by the user, the regions are displayed in a certain front-to-back order, and rendering results located behind are occluded by results in front (the user can dynamically adjust this order);
4. the regions are displayed according to their front-to-back spatial positions, with rendering results located behind occluded by results in front;
5. the front-to-back display order of the divided regions is chosen according to other principles (such as the clinical meaning of each region), with rendering results located behind occluded by results in front;
6. combinations of the above. The user can select any of these modes for fusion, and different spatial positions can be fused in different modes.
Among these fusion modes, modes 1 and 2 display every divided region simultaneously in semi-transparent form in the final fusion result, whereas modes 3 to 5 have no semi-transparent display, so part of the rendering result of some divided regions may be occluded by other regions. Modes 3 to 5 can also be understood as assigning, at a given spatial position, a fusion coefficient of 1 to one divided region and 0 to the regions it occludes. Whichever mode is used, the final display image can present the detailed appearance of the different divided regions simultaneously; a minimal weighted-superposition sketch follows.
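The sketch below illustrates the weighted superposition of modes 1 and 2; it assumes the sub-images are already-rendered RGB arrays in [0, 1] and that a simple clipped sum is an acceptable stand-in for the fusion formula, which the patent does not spell out.

```python
import numpy as np

def fuse_sub_images(sub_images, coeffs):
    """Multiply each rendered sub-image by its fusion coefficient in
    [0, 1] and superimpose the results, so every divided region remains
    visible in semi-transparent form."""
    fused = np.zeros_like(sub_images[0], dtype=float)
    for img, c in zip(sub_images, coeffs):
        fused += c * img.astype(float)
    return np.clip(fused, 0.0, 1.0)

# Modes 3-5 are the special case where, at a given position, the front
# region has coefficient 1 and the regions it occludes have coefficient 0.
imgs = [np.random.rand(64, 64, 3) for _ in range(3)]
fused = fuse_sub_images(imgs, coeffs=[1.0, 0.5, 0.2])
```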
After the divided regions or objects are rendered distinctively with different display configurations, the user can see the division of each region at a glance and adjust it subsequently, and the simultaneous display of the divided regions helps the user observe the details of every region at once, giving a better grasp of the overall appearance of the volume data.
In an improved embodiment, the three-dimensional ultrasound image display system further comprises an editing unit 450 and a setting unit 460. The editing unit 450 is responsible for adjusting the divided regions, for example reassigning volume data at the boundary of adjacent subsets from the subset it originally belonged to into the adjacent subset, and rendering the adjusted volume data with the display configuration of that adjacent subset. The user can adjust the region-division result wholly or partially as needed; adjustment modes include, but are not limited to, the following (see the brush sketch after this list):
1. selecting a different division strategy/algorithm to re-divide the regions, such as re-dividing with different models according to different division targets or clinical meanings;
2. dividing an existing region into several smaller regions of interest using a clipping operation or an additional algorithm, or removing part of an existing region to narrow the region of interest;
3. merging existing divided regions, or expanding the region of interest so that part of a non-interest region is added to it;
4. globally adjusting the division surfaces between regions, such as translating or rotating a division surface as a whole, adjusting the positions of its control points or the parameters of its equation to change its shape and position, or replacing its mathematical model;
5. locally adjusting the division surfaces between regions, such as using tools like a brush or an eraser, or moving some control points on a division surface, to locally enlarge/shrink a divided region; the ranges of other divided regions may shrink/grow correspondingly.
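A minimal sketch of the fifth mode, a spherical brush that locally reassigns voxels from one subset to an adjacent one, is shown below; representing subsets as named boolean masks is an assumption made for illustration.

```python
import numpy as np

def brush_reassign(masks, src, dst, center, radius):
    """Reassign the voxels of subset `src` that fall inside a spherical
    brush to the adjacent subset `dst`; the moved voxels will then be
    rendered with the display configuration of `dst`."""
    zi, yi, xi = np.indices(masks[src].shape)
    cz, cy, cx = center
    brush = (zi - cz) ** 2 + (yi - cy) ** 2 + (xi - cx) ** 2 <= radius ** 2
    moved = brush & masks[src]
    masks[src] &= ~moved   # shrink the source region locally
    masks[dst] |= moved    # grow the adjacent region correspondingly
    return masks

masks = {"fetus": np.ones((64, 64, 64), bool),
         "amniotic_fluid": np.zeros((64, 64, 64), bool)}
masks = brush_reassign(masks, "fetus", "amniotic_fluid",
                       center=(32, 32, 32), radius=5)
```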
Whichever mode is used, the editing unit 450 cooperates with the rendering unit 430 and the fusion unit 440 so that the user can interactively adjust the divided regions according to his or her needs and goals. Because the rendering unit 430 and the fusion unit 440 can display the range and the details of every divided region simultaneously, interactively adjusting the range of each region becomes intuitive and convenient.
The setting unit 460 is responsible for adjusting the display effect of each divided region, for example setting at least one of: the subsets shown on the final display interface, the display configuration of each subset, the fusion coefficient of each subset, and the way the fusion coefficient is calculated. The scope of what the setting unit 460 can set includes, but is not limited to:
1. whether each divided region is displayed in the final fusion result;
2. the fusion coefficient of each divided region, or the way the coefficient is calculated and the parameters it requires;
3. the display configuration of each divided region.
Whichever items are set, the setting unit 460 cooperates with the rendering unit 430 and the fusion unit 440 so that the user can interactively adjust the display effect according to his or her needs and goals. The display effect may be adjusted for each divided region separately, or for several or all divided regions at once. Because the rendering unit 430 and the fusion unit 440 can display the detailed appearance of every divided region, interactively adjusting the display effect becomes intuitive and convenient.
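Purely as an illustration of the state such a setting unit might manage, the sketch below groups the per-subset settings into one structure; all field names and values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubsetDisplaySettings:
    """Per-subset settings: whether the subset appears in the final
    fusion, its display configuration, and its fusion coefficient or
    the rule used to compute one adaptively."""
    visible: bool = True                 # shown in the final fusion result?
    colormap: str = "gray"               # display configuration: tint
    opacity_curve: str = "linear"        # display configuration: transfer fn
    fusion_coeff: float = 1.0            # fixed coefficient in [0, 1]
    adaptive_rule: Optional[str] = None  # e.g. "front-most-wins"

settings = {"fetal_face": SubsetDisplaySettings(fusion_coeff=1.0),
            "placenta": SubsetDisplaySettings(fusion_coeff=0.3)}
```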
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by computer programs. When implemented by a computer program, the program may be stored in a computer-readable storage medium, such as a read-only memory, a random-access memory, a magnetic disk, an optical disc or a hard disk, and executed by a computer to realize the functions above. For example, the program may be stored in the memory of the device, and all or part of the functions are realized when the processor executes it. The program may also be stored on a server, another computer, a magnetic disk, an optical disc, a flash drive or a removable hard disk, and downloaded or copied into the memory of the local device, or used to update the local device's system; all or part of the functions of the above embodiments are likewise realized when the processor executes the program in memory.
The present invention has been described with reference to specific examples, which are intended only to aid understanding and not to limit the invention. A person skilled in the art may make several simple deductions, modifications or substitutions according to the idea of the invention.

Claims (42)

1. A method of displaying a three-dimensional ultrasound image, comprising:
acquiring ultrasonic three-dimensional volume data to obtain a volume data set;
identifying a plurality of subsets from the volume data set;
establishing a plurality of different sets of display configurations;
differentially rendering the plurality of subsets using a plurality of sets of display configurations;
acquiring a fusion coefficient of each subset, wherein the fusion coefficient is a value between 0 and 1;
and multiplying the rendering results of the plurality of subsets by the respective fusion coefficients, so that the plurality of subsets are superimposed and displayed in a semi-transparent manner.
2. A method of displaying a three-dimensional ultrasound image, comprising:
establishing a plurality of different sets of display configurations;
differentially rendering a plurality of subsets of the ultrasound three-dimensional volume data set using a plurality of sets of display configurations;
acquiring a fusion coefficient of each subset, wherein the fusion coefficient is a value between 0 and 1;
and multiplying the rendering results of the plurality of subsets by the respective fusion coefficients, so that the plurality of subsets are superimposed and displayed in a semi-transparent manner.
3. The method according to claim 1 or 2, wherein the fusion coefficient is a preset fusion coefficient or an adaptive fusion coefficient calculated according to a fusion rule.
4. The method of claim 3, wherein obtaining the fusion coefficient of each subset comprises:
calculating the voxel value of each subset on each tracing ray using a ray-tracing method;
determining the fusion coefficient of each subset on each tracing ray according to the voxel value and spatial distribution of each subset on that ray;
or obtaining the fusion coefficient of each subset comprises:
calculating the voxel value of each subset on each tracing ray using a ray-tracing method;
identifying the spatial position of each subset in the incident direction of the tracing ray;
and determining the fusion coefficient of each subset on each tracing ray according to the voxel value, spatial distribution and spatial position of each subset on that ray.
5. The method of claim 4, wherein the fusion coefficient is determined according to at least one of the following rules:
in the incident direction of the tracing ray, a spatially forward subset has a larger fusion coefficient than a spatially backward subset;
on the tracing ray, a subset with a larger voxel value has a larger fusion coefficient than a subset with a smaller voxel value;
on the tracing ray, a subset with a larger voxel distribution range has a larger fusion coefficient than a subset with a smaller voxel distribution range.
6. The method of claim 4, wherein multiplying the rendering results of the plurality of subsets by their respective fusion coefficients for superimposed display comprises: multiplying the rendering result of each subset on a tracing ray by the fusion coefficient of that subset on the ray, and then superimposing the products.
7. The method of claim 1 or 2, further comprising determining whether the volume data set contains common volume data identified in at least two of the subsets; and rendering the common volume data according to any one of the display configurations used by the subsets to which the common volume data belongs, or rendering the common volume data with a display configuration different from the display configurations used by those subsets.
8. The method of claim 1 or 2, wherein differentially rendering the plurality of subsets using the plurality of sets of display configurations comprises: each subset is rendered using a different set of display configurations than the other subsets, or the subsets are divided into at least two groups, each group being rendered using a different set of display configurations than the other groups.
9. The method of claim 1 or 2, further comprising:
adjusting the volume data of the boundary part of the adjacent subsets from the original attributed subsets to the adjacent subsets;
rendering the adjusted volume data using the display configuration of the adjacent subset.
10. The method of claim 9, wherein adjusting the volume data of the boundary portion of the neighboring subset from the originally attributed subset to the neighboring subset comprises:
detecting the subset selected by the user and the selected area on the superimposed display image;
determining adjusted volume data according to the subset and the area selected by the user;
and adjusting the adjusted volume data from the originally attributed subset to the adjacent subset in response to a wiping or painting operation input by the user.
11. A method of displaying a three-dimensional ultrasound image, comprising:
acquiring ultrasonic three-dimensional volume data aiming at fetal detection to obtain a volume data set;
identifying a plurality of subsets from the volume data set based on image characteristics of the fetus;
rendering part or all of the plurality of subsets to obtain a plurality of sub-images;
fusing part or all of the plurality of sub-images according to a fusion coefficient to obtain a three-dimensional image, wherein the fusion coefficient is a value between 0 and 1 and the plurality of sub-images in the three-dimensional image are superimposed in a semi-transparent manner; and
displaying the three-dimensional image.
12. The method of claim 11, wherein rendering some or all of the plurality of subsets to obtain a plurality of sub-images comprises: rendering some or all of the plurality of subsets based on different display configurations to obtain the plurality of sub-images.
13. The method of claim 11, wherein the fusion coefficients are pre-set or adaptively calculated.
14. The method of claim 13, wherein fusing part or all of the plurality of sub-images to obtain the three-dimensional image further comprises setting different fusion coefficients for part or all of the plurality of sub-images in one of the following ways:
setting an observation viewpoint to obtain one or more tracing rays, calculating the voxel value of each subset on each tracing ray, and determining the fusion coefficient of each subset on each tracing ray according to the voxel value and spatial distribution of each subset on that ray; or
setting an observation viewpoint to obtain one or more tracing rays, calculating the voxel value of each subset on each tracing ray, identifying the spatial position of each subset in the incident direction of the tracing ray, and determining the fusion coefficient of each subset on each tracing ray according to the voxel value, spatial distribution and spatial position of each subset on that ray.
15. The method of claim 11, wherein the image features of the fetus comprise at least fetal facial features, at least one of the plurality of subsets being used to generate a fetal facial sub-image.
16. The method of claim 15, wherein the fetal facial features comprise: image features in the ultrasound three-dimensional volume data corresponding to the anatomical structures of one or more tissue structures on the fetal face selected from the group consisting of the fetal eyes, fetal nose, fetal forehead, fetal chin, fetal cheeks, fetal ears, fetal face contour and fetal mouth.
17. The method of claim 15, wherein identifying a plurality of subsets from the volume data set based on image characteristics of the fetus comprises:
according to the fetal facial features, determining the depth of each voxel on the fetal facial contour in the volume data set to form a depth-varying curved surface of the fetal facial contour;
dividing the volume data set into at least two subsets based on the depth-varying curved surface, wherein one subset includes the three-dimensional volume data of the fetal face.
18. The method of claim 15, wherein the method further comprises:
receiving a first instruction generated by a user through a single operation;
and according to the first instruction, reducing the fusion coefficient of the sub-images corresponding to the subsets other than the subset containing the face of the fetus.
19. The method of claim 11, wherein the method further comprises:
receiving a second instruction input by a user on the three-dimensional image;
according to the second instruction, identifying a first position on the three-dimensional image corresponding to the input of the user and a subset where the first position is located;
according to the first position, determining volume data included in the first position in the subset of the first position;
and reducing the fusion coefficient of the sub-image corresponding to the subset where the first position is located, or reducing the fusion coefficient of the sub-image corresponding to the volume data included in the first position.
20. The method of claim 11, wherein the method further comprises:
receiving a third instruction input by a user on the three-dimensional image;
according to the third instruction, identifying a second position on the three-dimensional image corresponding to the input of the user and a subset where the second position is located;
according to the second position, determining volume data included in the second position in the subset in which the second position is located;
and increasing the fusion coefficient of the sub-image corresponding to the subset where the second position is located, or increasing the fusion coefficient of the sub-image corresponding to the volume data included in the second position.
21. An ultrasound apparatus characterized by comprising:
the ultrasonic probe is used for transmitting ultrasonic waves to a region of interest in biological tissue and receiving echoes of the ultrasonic waves;
the transmitting/receiving sequence controller is used for generating a transmitting sequence and/or a receiving sequence, outputting the transmitting sequence and/or the receiving sequence to the ultrasonic probe, and controlling the ultrasonic probe to transmit ultrasonic waves to the region of interest and receive echoes of the ultrasonic waves;
the processor is used for generating ultrasonic three-dimensional volume data according to the ultrasonic echo data to obtain a volume data set, identifying a plurality of subsets from the volume data set, establishing a plurality of different sets of display configurations, performing distinctive rendering on the plurality of subsets using the plurality of sets of display configurations, acquiring a fusion coefficient of each subset, wherein the fusion coefficient is a value between 0 and 1, and, after multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, superimposing and displaying the plurality of subsets in a semi-transparent manner;
the human-computer interaction device comprises a display for displaying the rendered ultrasound image.
22. The ultrasound device according to claim 21, wherein the fusion coefficient is a preset fusion coefficient or an adaptive fusion coefficient calculated according to a fusion rule.
23. The ultrasound apparatus of claim 22, wherein the processor determines the fusion coefficient of each subset on a tracing ray based on at least one of the spatial position, the voxel value and the spatial distribution of each subset on that ray.
24. The ultrasound device of claim 23, wherein the fusion coefficient is determined according to at least one of the following rules:
in the incident direction of the tracing ray, a spatially forward subset has a larger fusion coefficient than a spatially backward subset;
on the tracing ray, a subset with a larger voxel value has a larger fusion coefficient than a subset with a smaller voxel value;
on the tracing ray, a subset with a larger voxel distribution range has a larger fusion coefficient than a subset with a smaller voxel distribution range.
25. The ultrasound device of claim 21, wherein the processor reduces the opacity of the rendering results of the subsets by a fusion coefficient.
26. The ultrasound device of claim 21, wherein the processor further determines whether the volume data set contains common volume data identified in at least two subsets, and renders the common volume data in any one of the display configurations used by the subsets to which it belongs, or in a display configuration different from the display configurations used by those subsets.
27. The ultrasound device of claim 21, wherein the processor, in differentially rendering the plurality of subsets using the plurality of sets of display configurations, renders each subset using a different set of display configurations than the other subsets, or groups the plurality of subsets into at least two groups, each group being rendered using a different set of display configurations than the other groups.
28. The ultrasound device of any of claims 21 to 27, wherein the processor adjusts the volume data of the border portion of the adjacent subset from the originally attributed subset to the adjacent subset and renders the adjusted volume data using the display configuration of the adjacent subset.
29. The ultrasound device of claim 28, wherein the human-computer interaction device further comprises a control panel on which a first control is disposed; the processor detects the first control selected by the user and the selected area on the superimposed display image, determines the adjusted volume data based on the subset and the area selected by the user, and adjusts the adjusted volume data from the originally attributed subset to the adjacent subset in response to a wiping or painting operation input by the user.
30. An ultrasound apparatus characterized by comprising:
a memory for storing a program;
a processor for implementing the method of any one of claims 1-20 by executing a program stored by the memory.
31. A computer-readable storage medium, comprising a program executable by a processor to implement the method of any one of claims 1-20.
32. A three-dimensional ultrasound image display system, comprising:
an acquisition unit for acquiring ultrasound three-dimensional volume data and obtaining a volume data set;
an identification unit for identifying a plurality of subsets from the volume data set;
a rendering unit for establishing a plurality of sets of different display configurations and performing a distinctive rendering of the plurality of subsets using the plurality of sets of display configurations;
and the fusion unit is used for acquiring the fusion coefficient of each subset, wherein the fusion coefficient is a value between 0 and 1, and multiplying the rendering results of the plurality of subsets by the respective fusion coefficients, so that the plurality of subsets are superimposed and displayed in a semi-transparent manner.
33. The system of claim 32, further comprising an editing unit for adjusting the volume data of the boundary portion of the adjacent subset from the originally attributed subset to the adjacent subset, and rendering the adjusted volume data using the display configuration of the adjacent subset.
34. The system according to claim 32, further comprising a setting unit for setting at least one of the subsets displayed on the final display interface, a display configuration of each subset, a fusion coefficient of each subset, and a calculation manner of the fusion coefficient.
35. An ultrasound apparatus characterized by comprising:
the ultrasonic probe is used for transmitting ultrasonic waves to a region of interest in biological tissue and receiving echoes of the ultrasonic waves;
the transmitting/receiving sequence controller is used for generating a transmitting sequence and/or a receiving sequence, outputting the transmitting sequence and/or the receiving sequence to the ultrasonic probe, and controlling the ultrasonic probe to transmit ultrasonic waves to the region of interest and receive echoes of the ultrasonic waves;
the processor is used for acquiring ultrasonic three-dimensional volume data for fetal detection to obtain a volume data set, identifying a plurality of subsets from the volume data set according to image characteristics of a fetus, rendering part or all of the plurality of subsets to obtain a plurality of sub-images, fusing part or all of the plurality of sub-images according to a fusion coefficient to obtain a three-dimensional image, wherein the fusion coefficient is a value between 0 and 1 and the plurality of sub-images in the three-dimensional image are superimposed in a semi-transparent manner, and outputting the three-dimensional image to the display;
the human-computer interaction device comprises a display for displaying the ultrasonic three-dimensional image.
36. The ultrasound device of claim 35, wherein the processor obtains a plurality of sub-images based on rendering some or all of the plurality of subsets in different display configurations.
37. The ultrasound device of claim 35, wherein the processor derives the fusion coefficients according to a preset or adaptive calculation.
38. The ultrasound apparatus of claim 35, wherein the fusion coefficients are set according to voxel values, spatial distribution and/or spatial location of each subset on the tracing ray.
39. The ultrasound device of claim 35, wherein the image features of the fetus comprise at least fetal facial features, at least one of the plurality of subsets is used to generate a fetal facial sub-image, and wherein the processor identifying the plurality of subsets from the volume data set based on the image features of the fetus comprises:
according to the fetal facial features, determining the depth of each voxel on the fetal facial contour in the volume data set to form a depth-varying curved surface of the fetal facial contour;
dividing the volume data set into at least two subsets based on the depth-varying curved surface, wherein one subset includes the three-dimensional volume data of the fetal face.
40. The ultrasound device according to claim 35, wherein the human-computer interaction device further comprises a control panel on which a second control key corresponding to a one-key occlusion-removal function is disposed; the processor is further configured to receive a first instruction generated by a user through a single operation of the second control key and, according to the first instruction, to reduce the fusion coefficient of the sub-images corresponding to the subsets other than the subset containing the face of the fetus.
41. The ultrasound device of claim 35, wherein the processor is further configured to receive a second instruction input by the user on the three-dimensional image, identify a first location on the three-dimensional image corresponding to the input by the user according to the second instruction, and decrease the fusion coefficient of the sub-images corresponding to the subset where the first location is located, or decrease the fusion coefficient of the sub-images corresponding to the volume data included in the first location.
42. The ultrasound device of claim 35, wherein the processor is further configured to receive a third instruction input by the user on the three-dimensional image, and according to the third instruction, identify a second location on the three-dimensional image corresponding to the input by the user, and increase the fusion coefficient of the sub-images corresponding to the subset where the second location is located, or increase the fusion coefficient of the sub-images corresponding to the volume data included in the second location.