CN111403007A - Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium


Info

Publication number
CN111403007A
Authority
CN
China
Prior art keywords
anatomical structure
target anatomical
information
dimensional
light source
Prior art date
Legal status
Pending
Application number
CN201811642211.XA
Other languages
Chinese (zh)
Inventor
王艾俊
林穆清
邹耀贤
贾洪飞
杨雪梅
Current Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN201811642211.XA
Publication of CN111403007A


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 15/205 - Image-based rendering
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application discloses an optimization method of ultrasonic imaging, which comprises: obtaining three-dimensional volume data of a subject through ultrasonic examination; identifying target anatomical structure information from the three-dimensional volume data; automatically setting light source parameters according to the target anatomical structure information; and rendering the three-dimensional volume data with the light source parameters. The application also discloses an ultrasonic imaging system, an ultrasonic image processing system and a computer-readable storage medium related to the optimization method. Because the target anatomical structure information is identified from the three-dimensional volume data and the light source parameters are automatically set to appropriate values according to that information, the ultrasonic imaging effect is optimized.

Description

Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium
Technical Field
The present application relates to the field of medical imaging, and more particularly, to a method, related system, and medium for ultrasound imaging.
Background
Multi-light-source volume lighting is a rendering effect added on top of basic three-dimensional imaging; it aims to enhance the three-dimensional appearance and make the rendered result more realistic. It is called volume light because an object rendered with such lighting visually conveys a sense of three-dimensional space, representing the glow formed around the object by light scattered through a medium. Conventional light source types mainly include parallel light, point light sources and spotlights, and "multi-light-source" means that several light sources are applied to an object in combination to simulate a real scene. In recent years, multi-light-source volume lighting has gradually been applied to medical three-dimensional ultrasound imaging; adjusting the three-dimensional volume data to an optimal rendering state can effectively help doctors obtain more clinical information.
In three-dimensional ultrasound imaging, the rendering effect of multi-light-source volume lighting is strongly influenced by the light source parameters (number, type, position, contrast, volume light intensity, and the like), and the parameters that need to be adjusted differ between organs and between patients. Because these parameters involve a complex imaging theory, most doctors do not understand them well enough to adjust them in a targeted manner for different anatomical structures and different patient data, so the three-dimensional ultrasound imaging effect often fails to reach its optimal state.
Disclosure of Invention
The application provides an optimization method of ultrasonic imaging, a related system and a storage medium, aiming to solve the technical problem that it is difficult for a user to manually adjust the light source parameters to appropriate values, which in turn makes a good imaging effect difficult to achieve.
The application provides an optimization method of ultrasonic imaging, which comprises the steps of obtaining three-dimensional volume data of a tested object obtained through ultrasonic examination; identifying target anatomical structure information from the three-dimensional volumetric data; automatically setting light source parameters according to the target anatomical structure information; and rendering the three-dimensional volume data by adopting the light source parameters.
The application also provides an ultrasonic imaging system, which comprises an ultrasonic probe, a memory and a processor, wherein the memory is used for storing programs; and the processor is used for realizing the optimization method of the ultrasonic imaging by executing the program stored in the memory.
The present application also provides a computer-readable storage medium characterized by comprising a program executable by a processor to implement the aforementioned optimization method of ultrasound imaging.
According to the optimization method of ultrasonic imaging, the target anatomical structure information is identified from the three-dimensional volume data and the light source parameters are automatically set to appropriate values according to that information. This reduces the dependence of light source parameter setting on the professional skill and experience of medical staff and, at the same time, improves their working efficiency.
Drawings
FIG. 1 is a block diagram of an embodiment of an ultrasound imaging system;
FIG. 2 is a flow diagram of one embodiment of a method for optimization of ultrasound imaging;
FIG. 3 is a flow chart of another embodiment of a method for optimizing ultrasound imaging;
FIG. 4 is a flow chart of another embodiment of an optimization method for ultrasound imaging.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, where like elements in different embodiments are given like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted, or replaced with other elements, materials or methods, in different instances. In some instances, certain operations related to the present application are not shown or described in detail in order to avoid obscuring the core of the present application with excessive description; a detailed description of these operations is not necessary, as those skilled in the art can fully understand them from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. The steps or actions in the method descriptions may also be swapped or reordered in a manner apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are only for describing certain embodiments and do not imply a required order unless it is otherwise stated that a certain order must be followed.
Ordinal numbers such as "first" and "second" are used herein only to distinguish the described objects and do not have any sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" as used in this application include both direct and indirect connections (couplings).
Fig. 1 is a block diagram of an ultrasound imaging system 10. The ultrasound imaging system 10 may include an ultrasound probe 100, a transmit/receive selection switch 101, a transmit/receive sequence controller 102, a processor 103, and a display 104. The transmit/receive sequence controller 102 generates a transmission sequence for exciting the ultrasound probe 100 through the transmit/receive selection switch 101 to transmit ultrasonic waves to the subject, and a reception sequence for controlling the ultrasound probe 100 through the transmit/receive selection switch 101 to receive the ultrasonic echoes returned from the subject, thereby obtaining ultrasonic echo signals/data. The processor 103 processes the ultrasonic echo signals/data to obtain three-dimensional volume data of the subject and optimizes three-dimensional ultrasound imaging based on that data. The three-dimensional volume data obtained by the processor 103 may be stored in the memory 105, and the memory 105 may also store programs executed by the processor to optimize the ultrasound image. In some embodiments, the processor 103 may output the non-optimized ultrasound image or the optimized three-dimensional ultrasound image to the display 104 for display.
In one embodiment, the display 104 may be a display attached to the ultrasound device, a display independent from the ultrasound device, or a display screen of an electronic device such as a mobile phone or a tablet computer.
The embodiment of the present application further provides a computer-readable storage medium, where multiple program instructions are stored, and after the multiple program instructions are called and executed by the processor 103, some or all of the steps or any combination of the steps in the following method embodiments of the present application may be executed.
As shown in fig. 2, in an embodiment, an optimization method of ultrasound imaging is provided, which specifically includes the following steps:
s201: acquiring three-dimensional volume data of the subject obtained through ultrasound examination.
In one embodiment, a three-dimensional ultrasound scan may be performed on the subject by the ultrasound probe 100 of the ultrasound imaging system 10: ultrasonic waves are transmitted to the subject and their echoes are received to obtain ultrasonic echo signals, which the processor 103 processes to obtain the three-dimensional volume data of the subject. The three-dimensional volume data may be obtained by processing ultrasonic echo signals received from the subject in real time, may be stored in the memory 105 of the ultrasound imaging system in advance, or may be obtained by processing ultrasonic echo signals stored in the memory 105 in advance.
In one embodiment, the processor may also be a processor in a computer, a server, a workstation or another electronic terminal or platform independent of the ultrasound imaging system. Such a processor may obtain the ultrasonic echo signals or the three-dimensional volume data acquired and stored by the ultrasound imaging system through a wired or wireless network or through a data transmission device (e.g., a flash memory, a hard disk, an optical disk, etc.), and then perform the methods in the embodiments of the present application on those signals or data.
The three-dimensional volume data is a data set in which each pixel position of the image is represented by a three-dimensional spatial coordinate, and each position corresponds to one pixel point with a corresponding pixel value. In one embodiment, the three-dimensional volume data may be static three-dimensional volume data or a dynamic three-dimensional volume data set.
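As a concrete illustration of this data layout (a sketch under assumed dimensions, not part of the claimed method), a volume can be held as a dense array whose indices are the spatial coordinates and whose entries are the pixel values:

```python
import numpy as np

# A minimal sketch of static three-dimensional volume data: each (z, y, x)
# index is a spatial position and the stored value is that position's pixel value.
# The 200x160x120 grid and 8-bit intensity range are illustrative assumptions.
volume = np.zeros((200, 160, 120), dtype=np.uint8)

# A dynamic data set can be modelled as a sequence of such volumes over time.
dynamic_volume = [volume.copy() for _ in range(10)]  # 10 time frames (assumed)

# Reading one pixel value at spatial coordinate (z, y, x) = (50, 40, 30):
value = volume[50, 40, 30]
print(volume.shape, value)
```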
s202: identifying target anatomical structure information from the three-dimensional volume data.
In one embodiment, the target anatomical structure information includes at least one of type information, position information and orientation information of the target anatomical structure. The processor 103 may identify any one of these three kinds of information alone, any combination of two of them, or all three from the three-dimensional volume data. In some embodiments, the target anatomical structure information is not limited to type, position and orientation information and may include other information related to the target anatomical structure.
In one embodiment, after acquiring the three-dimensional volume data of the subject, the processor 103 may automatically identify the target anatomical structure information from the three-dimensional volume data, or may identify it in response to an instruction input by a user. There may be one or more target anatomical structures, and the specific number can be set according to the examination requirements.
s203: automatically setting light source parameters according to the target anatomical structure information.
In one embodiment, the light source parameters include at least one of light source type, volume light contrast, volume light intensity, number of light sources, light source direction and light source position. Because the parameters to be adjusted differ between organs and between patients, the processor 103 may set one of these light source parameters or a combination of them according to the target anatomical structure information. In some embodiments, the light source parameters are not limited to the above and may include other relevant parameters. Automatically setting the light source parameters according to the target anatomical structure information may be performed by automatically matching the parameters with a table look-up method, by constructing a matching function, or by other automatic setting methods.
In one embodiment, the type information of the target anatomical structure is identified from the three-dimensional volume data, and at least one of the light source type, the volume light contrast and the volume light intensity is automatically set according to that type information. The light source type includes parallel light, a point light source, a spotlight and the like; the automatically set light source may be a single parallel light, point light source, spotlight or other light source type, or a combination of several types. Volume light is the glow formed around an object after light passes through a certain medium, and an object rendered with such light visually conveys a sense of three-dimensional space. Adjusting at least one of the light source type, the volume light contrast and the volume light intensity according to the type of the target anatomical structure allows a real scene to be simulated more faithfully and helps doctors acquire more clinical information. For example, when the anatomical structure is the bladder, the processor 103 automatically sets the light source to a point light source and places it inside the bladder based on the bladder's cavity structure, so that the rendered bladder appears more transparent. When the anatomical structure is a fetal face, on which the doctor mainly focuses, the processor 103 automatically sets the light source type to a spotlight to highlight the facial information, and automatically adjusts the volume light contrast and intensity to simulate the atmosphere of the fetal face in amniotic fluid.
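As a minimal sketch of the table look-up approach described above (the preset table, field names and numeric values are illustrative assumptions, not values taken from this application), light source parameters could be matched from the identified anatomy type as follows:

```python
# A minimal sketch of table-look-up light source selection, assuming a
# hypothetical preset table; the field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class LightPreset:
    light_type: str            # "point", "spotlight", "parallel", or a combination
    count: int = 1             # number of light sources
    contrast: float = 0.5      # volume light contrast (assumed 0..1 scale)
    intensity: float = 0.5     # volume light intensity (assumed 0..1 scale)
    placement: str = "front"   # coarse placement hint resolved later from position info

# Hypothetical presets reflecting the examples in the text: a point source inside
# the bladder cavity, a spotlight with tuned contrast/intensity for a fetal face.
LIGHT_PRESETS = {
    "bladder":    LightPreset(light_type="point", placement="inside_cavity"),
    "fetal_face": LightPreset(light_type="spotlight", contrast=0.7, intensity=0.8),
}

def set_light_parameters(anatomy_type: str) -> LightPreset:
    """Automatically match light source parameters from the anatomy type."""
    # Fall back to a generic preset when the anatomy type is not in the table.
    return LIGHT_PRESETS.get(anatomy_type, LightPreset(light_type="parallel"))

print(set_light_parameters("bladder"))
```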
In one embodiment, the direction information of the target anatomical structure is identified from the three-dimensional volume data, and at least one of the number of light sources and the light source direction is automatically set according to that direction information to optimize the imaging effect. In this embodiment, the number and direction of the light sources may be set automatically together with one or more of the automatically set light source type, light source position, volume light contrast or volume light intensity, or they may be set automatically on their own. For example, when the anatomical structure is the fetal spine, the processor 103 identifies the direction information of the spine from the three-dimensional volume data and automatically sets the light source direction to be directly in front of the spine, i.e. along its normal direction, or at a certain angle to the normal direction. When the anatomical structure is a fetal face, the doctor mainly focuses on the lips, nose, face and related information; if only a single light source were used for rendering, only the partial information associated with that light source's irradiation direction could be obtained, so multiple light sources in different directions can be set automatically to present more complete facial information.
In one embodiment, position information of a target anatomical structure is identified from three-dimensional volume data, and a light source position is automatically set according to the position information of the target anatomical structure. In this embodiment, the light source position may be automatically set in combination with one or more of the automatically set light source type, the volume light contrast, the volume light intensity, the light source number, or the light source direction, or the light source position may be automatically set alone.
s204: rendering the three-dimensional volume data using the light source parameters.
In one embodiment, the light source parameters automatically set in s203 are used to render the three-dimensional volume data; the rendering may be surface rendering or volume rendering. Surface rendering is based on segmented volume data: an iso-surface construction algorithm (Marching Cubes) generates a series of vertices, faces and normals, which are then shaded and rendered with computer graphics techniques. Surface rendering efficiently highlights surface detail in the region of interest; for bladder volume data, for example, surface rendering with automatically set parameters such as ambient light, diffuse reflection and specular light can effectively bring out the texture of the bladder wall. Volume rendering uses a ray casting algorithm to perform transparency blending and cumulative sampling of the volume data along fixed ray paths, obtaining color values for the voxels and thus completing the rendering. Volume rendering can display both the surface and the internal information of an object with high quality; for fetal face data, for example, volume rendering with automatically set parameters such as the number and type of light sources and the volume light intensity can effectively highlight the overall facial information. In this embodiment, different rendering modes are selected automatically or manually for different target anatomical structures to achieve the best rendering effect and help the doctor obtain more clinical information.
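The following is a minimal sketch of ray-casting volume rendering with a single point light, assuming an orthographic view along the z axis; the transfer function, light position and step size are illustrative assumptions rather than parameters prescribed by the described system:

```python
# A minimal sketch of ray-casting volume rendering with one point light.
import numpy as np

def render_volume(volume, light_pos, light_intensity=1.0, step=1):
    """Front-to-back alpha compositing along z rays, shaded by a point light."""
    depth, height, width = volume.shape
    vol = volume.astype(np.float32) / volume.max()
    # Simple transfer function: opacity proportional to intensity (assumed).
    opacity = np.clip(vol * 0.05, 0.0, 1.0)

    # Intensity gradients act as surrogate surface normals for diffuse shading.
    gz, gy, gx = np.gradient(vol)

    image = np.zeros((height, width), dtype=np.float32)
    transparency = np.ones((height, width), dtype=np.float32)
    yy, xx = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")

    for z in range(0, depth, step):
        normal = np.stack([gz[z], gy[z], gx[z]], axis=0)
        norm_len = np.linalg.norm(normal, axis=0) + 1e-6
        to_light = np.stack(
            [np.full_like(yy, light_pos[0] - z, dtype=np.float32),
             light_pos[1] - yy, light_pos[2] - xx], axis=0).astype(np.float32)
        light_len = np.linalg.norm(to_light, axis=0) + 1e-6
        diffuse = np.clip((normal * to_light).sum(axis=0) / (norm_len * light_len), 0.0, 1.0)

        sample_color = vol[z] * (0.2 + 0.8 * diffuse) * light_intensity
        image += transparency * opacity[z] * sample_color      # cumulative sampling
        transparency *= (1.0 - opacity[z])                     # transparency blending
        if transparency.max() < 1e-3:                          # early ray termination
            break
    return image

# Usage with a synthetic volume and a light placed in front of the volume (assumed):
demo = np.random.rand(64, 64, 64).astype(np.float32)
img = render_volume(demo, light_pos=(-20.0, 32.0, 32.0))
print(img.shape, float(img.max()))
```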
In one embodiment, after rendering the three-dimensional volume data with the light source parameters, the processor 103 further generates a three-dimensional ultrasound image based on the rendered data and outputs it to the display 104 of the ultrasound imaging system 10 for display. The processor 103 may output the whole or a part of the rendered target anatomical structure image, or the whole or a part of the rendered three-dimensional volume data, to the display 104. Corresponding to the three-dimensional volume data acquired in s201: if static three-dimensional volume data was obtained through the ultrasound examination, the rendered ultrasound image shown on the display 104 is a static image; if dynamic three-dimensional volume data was obtained, the rendered ultrasound image may be displayed either as a dynamic image that changes over time, i.e. as video, or as a static image.
As shown in fig. 3, in an embodiment, before s202 of identifying the target anatomical structure information from the three-dimensional volume data, the method further includes: s302, generating a three-dimensional ultrasound image based on the obtained three-dimensional volume data and outputting it to the display 104 of the ultrasound imaging system 10 for display. In this case, s202 of identifying the target anatomical structure information from the three-dimensional volume data includes: s303, receiving an identification made by the user on the displayed three-dimensional ultrasound image and identifying the position information of the target anatomical structure from that identification; and/or receiving an input instruction from the user specifying the type of the target anatomical structure and identifying the type information of the target anatomical structure from that instruction.
In one embodiment, a three-dimensional ultrasound image is generated from the three-dimensional volume data obtained by the ultrasound examination and output to the display 104. The processor 103 receives an identification made by the user on the displayed image; the identification may be a contour of the target anatomical structure drawn by the user, a box drawn around the region of the target anatomical structure, coordinate information of the target anatomical structure, or another possible form of identification, and it may be entered by clicking or sliding on the ultrasound image with an input tool such as a keyboard or mouse in order to identify the target anatomical structure.
In one embodiment, a three-dimensional ultrasound image is displayed based on the obtained three-dimensional volume data, an input instruction from the user regarding the type of the target anatomical structure is received, and the type information of the target anatomical structure is identified from that instruction. The input instruction may be a click on anatomical structure type options generated on the display 104 by the processor 103 after an initial identification, a click on preset anatomical structure type options shown on the display 104, a type entered by the user via a keyboard, or another way of inputting the target anatomical structure type.
When a three-dimensional ultrasound image is displayed based on the obtained three-dimensional volume data, the processor 103 may both identify the position information of the target anatomical structure from the identification made by the user and identify the type information from an input instruction on the type of the target anatomical structure. Alternatively, the processor may identify only one of the position information and the type information in this way, and the remaining anatomical structure information may be identified by combining the other embodiments.
In one embodiment, s202 of identifying the target anatomical structure information from the three-dimensional volume data may use a machine learning method to automatically identify at least one of the type information and the position information of the target anatomical structure. Machine learning methods include deep learning methods, feature-based machine learning methods, and the like; a single method may be applied on its own, or several methods may be applied in combination.
For the deep learning method, identifying the target anatomical structure information from the three-dimensional volume data in s202 includes: inputting the three-dimensional volume data into a neural network model trained on a preset sample database, where the preset sample database contains a large number of anatomical structures and their corresponding feature information; directly regressing the corresponding region of interest (VOI) through the neural network model, where the region of interest may be the region of the target anatomical structure or a region containing it; and acquiring at least one of the type information and the position information of the target anatomical structure within the region of interest.
For example, the Bounding-Box method based on deep learning uses stacked convolutional layers and fully connected layers to perform feature learning and parameter regression on the preset database, thereby constructing a neural network model. When the three-dimensional volume data of the subject needs to be identified, it is input into the neural network model, which directly regresses at least one of the position information and the type information of the corresponding region of interest; common networks include R-CNN, Fast-RCNN, SSD, YOLO and the like.
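A minimal sketch of such a regression network is shown below, written in PyTorch as an assumption (this application does not prescribe a framework); the layer sizes, the six-value box encoding and the number of anatomy classes are illustrative:

```python
# A minimal sketch of a Bounding-Box style 3D regression network, assuming
# PyTorch; layer sizes, the 6-value box encoding (z, y, x, depth, height, width)
# and the number of anatomy classes are illustrative assumptions.
import torch
import torch.nn as nn

class VOIRegressor(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(            # stacked convolutional layers
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.box_head = nn.Linear(32, 6)             # regresses the VOI position/size
        self.type_head = nn.Linear(32, num_classes)  # classifies the anatomy type

    def forward(self, volume: torch.Tensor):
        x = self.features(volume).flatten(1)         # fully connected heads follow
        return self.box_head(x), self.type_head(x)

# Usage on one volume (batch of 1, single channel, 64^3 voxels, assumed size):
model = VOIRegressor()
box, logits = model(torch.randn(1, 1, 64, 64, 64))
anatomy_type = logits.argmax(dim=1)
print(box.shape, anatomy_type.shape)   # torch.Size([1, 6]) torch.Size([1])
```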
For the feature-based machine learning method, identifying the target anatomical structure information from the three-dimensional volume data in s202 includes: determining a group of regions of interest in the three-dimensional volume data, the group containing multiple regions of interest, which may be determined for example with a sliding window method; extracting features from each region of interest; matching the extracted features against the preset sample database to determine whether the current region of interest contains the target anatomical structure; and, when the matching succeeds, using a classifier to obtain at least one of the type information and the position information of the target anatomical structure.
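A minimal sketch of this sliding-window pipeline is given below; the mean/standard-deviation feature vector, the nearest-template matching used as the classifier, and the window size, stride and threshold are all illustrative assumptions:

```python
# A minimal sketch of sliding-window, feature-matching detection in a 3D volume.
import numpy as np

def extract_features(patch: np.ndarray) -> np.ndarray:
    # Very simple feature vector; a real system would use richer descriptors.
    return np.array([patch.mean(), patch.std()], dtype=np.float32)

def detect(volume: np.ndarray, sample_db: dict, window=32, stride=16, threshold=0.2):
    """Slide a cubic window over the volume and match each patch to the database."""
    hits = []
    d, h, w = volume.shape
    for z in range(0, d - window + 1, stride):
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                feat = extract_features(volume[z:z+window, y:y+window, x:x+window])
                # Nearest template in the preset sample database acts as the classifier.
                label, dist = min(((name, np.linalg.norm(feat - ref))
                                   for name, ref in sample_db.items()),
                                  key=lambda item: item[1])
                if dist < threshold:                   # matching succeeded
                    hits.append((label, (z, y, x)))    # type + position information
    return hits

# Usage with a synthetic volume and a hypothetical two-entry sample database:
db = {"bladder": np.array([0.2, 0.1], dtype=np.float32),
      "fetal_face": np.array([0.6, 0.3], dtype=np.float32)}
print(detect(np.random.rand(64, 64, 64).astype(np.float32), db)[:3])
```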
As shown in fig. 4, in one embodiment, the aforementioned manual identification of the position information of the target anatomical structure may be combined with automatic identification of its type information. That is: s401, acquiring three-dimensional volume data of the subject through the ultrasound examination; s402, generating a three-dimensional ultrasound image based on the obtained three-dimensional volume data and outputting it to a display; s403, receiving an identification made by the user on the displayed three-dimensional ultrasound image; s404, identifying the position information of the target anatomical structure from the identification; s405, identifying the type information of the target anatomical structure from the three-dimensional volume data at the location given by the position information using a machine learning method. Combining the machine learning method with the position information of the target anatomical structure reduces the range of three-dimensional volume data that the machine learning method has to process and improves the efficiency and accuracy of identifying the position and type information of the anatomical structure. The machine learning method may be a deep learning method, a feature-based machine learning method, another machine learning method, or a combination of machine learning methods. s406, automatically setting the light source parameters according to the target anatomical structure information; s407, rendering the three-dimensional volume data using the light source parameters. The corresponding steps are described above and are not repeated here.
In one embodiment, based on the position information of the target anatomical structure identified from the three-dimensional volume data in any of the previous embodiments, the direction information of the target anatomical structure is identified from that position information. The contour of the target anatomical structure may be determined from its position information, and the normal vector of the target anatomical structure may be determined from the contour, thereby determining the direction information. For example, when the target anatomical structure is the spine, its position information is known and the skeleton of the binary image is extracted, from which the directions of the long and short axes of the spine, and hence its direction information, can be obtained. When the target anatomical structure is the fetal face, the direction of the facial structures is obtained from the detected position information of the fetus's eyes, nose, mouth and so on, combined with the corresponding gradient features.
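As one possible realisation of deriving axis directions from the position information (the text describes skeleton extraction; principal component analysis of the segmented voxel coordinates is used here purely as an illustrative substitute):

```python
# A minimal sketch of estimating a structure's long/short axis directions from
# its position information (a binary mask), using PCA of the voxel coordinates;
# the skeleton-based approach in the text is replaced by PCA for illustration.
import numpy as np

def axis_directions(mask: np.ndarray):
    """Return unit vectors for the long and short axes of a binary 3D mask."""
    coords = np.argwhere(mask)                        # (N, 3) voxel coordinates
    centered = coords - coords.mean(axis=0)
    cov = np.cov(centered, rowvar=False)              # 3x3 covariance of positions
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    long_axis = eigvecs[:, -1]                        # largest-variance direction
    short_axis = eigvecs[:, 0]                        # smallest-variance direction
    return long_axis, short_axis

# Usage: an elongated synthetic "spine-like" mask along the first axis (assumed).
mask = np.zeros((64, 16, 16), dtype=bool)
mask[8:56, 6:10, 6:10] = True
long_axis, short_axis = axis_directions(mask)
print(np.round(long_axis, 2), np.round(short_axis, 2))
```

The light source direction could then be chosen along, or at a certain angle to, a direction orthogonal to the long axis, consistent with the spine example described above.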
In one embodiment, before acquiring the three-dimensional volume data of the subject obtained through the ultrasound examination, the method further includes acquiring the ultrasound probe type and/or the examination mode and determining the preset sample database according to them. Probe types include ultrasound volume probes, area array probes and the like, and different probe types suit different types of anatomical structure; the examination mode refers to the region being examined, e.g. abdomen, kidney, fetus or prostate. The ultrasound probe type and/or examination mode may be entered manually by the user or detected automatically by the processor 103. Using the acquired probe type and/or examination mode, a preset sample database matching the selected probe type and/or examination mode can be obtained, which reduces the amount of computation needed to match feature information against the preset sample database and makes the identification of the target anatomical structure information from the three-dimensional volume data more accurate and faster.
When all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium such as a read-only memory, a random access memory, a magnetic disk, an optical disk or a hard disk, and the functions are realized when the program is executed by a computer. For example, the program may be stored in a memory of the device, and all or part of the above functions are realized when the program in the memory is executed by a processor. The memory may be a volatile memory such as a random access memory (RAM), or a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD). The program may also be executed by a processor such as a microprocessor or a microcontroller.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (14)

1. An optimization method of ultrasonic imaging is characterized by comprising the following steps,
acquiring three-dimensional volume data of a tested object obtained through ultrasonic inspection;
identifying target anatomical structure information from the three-dimensional volumetric data;
automatically setting light source parameters according to the target anatomical structure information;
and rendering the three-dimensional volume data by adopting the light source parameters.
2. The method of optimizing ultrasound imaging according to claim 1,
the target anatomical structure information includes at least one of type information, position information, and orientation information of the target anatomical structure.
3. The method of optimizing ultrasound imaging according to claim 2,
before identifying the target anatomical structure information from the three-dimensional volume data, generating a three-dimensional ultrasonic image based on the obtained three-dimensional volume data and outputting the three-dimensional ultrasonic image to a display for displaying;
identifying target anatomical structure information from the three-dimensional volumetric data includes,
receiving an identification made by a user on the three-dimensional ultrasonic image, and identifying the position information of the target anatomical structure according to the identification; and/or
Receiving an input instruction of a user for a target anatomical structure type, and identifying type information of the target anatomical structure according to the input instruction.
4. The method of optimizing ultrasound imaging according to claim 2,
identifying target anatomical structure information from the three-dimensional volumetric data includes,
identifying target anatomical structure information from the three-dimensional volume data using a machine learning method, the target anatomical structure information including at least one of type information and position information of a target anatomical structure.
5. The method of optimizing ultrasound imaging according to claim 4, wherein the machine learning method is a deep learning method, and wherein identifying target anatomical structure information from the three-dimensional volume data using the machine learning method comprises:
inputting three-dimensional volume data into a neural network model, wherein the neural network model is formed by training through a preset sample database;
directly regressing a corresponding region of interest through a neural network model;
at least one of type information and location information of a target anatomical structure within the region of interest is acquired.
6. The method of optimizing ultrasound imaging according to claim 4, wherein the machine learning method is a feature class machine learning method, and identifying target anatomical structure information from the three-dimensional volume data using the machine learning method comprises:
determining a set of regions of interest in the three-dimensional volumetric data;
extracting the characteristics of each region of interest;
matching the extracted features with a preset sample database, and determining whether the current region of interest contains a target anatomical structure;
when the matching is successful, at least one of type information and position information of the target anatomical structure is acquired using a classifier.
7. The method of optimizing ultrasound imaging according to claim 2,
identifying target anatomical structure information from the three-dimensional volume data further comprises: generating a three-dimensional ultrasonic image based on the obtained three-dimensional volume data and outputting the three-dimensional ultrasonic image to a display for displaying;
identifying target anatomical structure information from the three-dimensional volumetric data comprises:
receiving an identification made by a user on the displayed three-dimensional ultrasonic image;
identifying location information of the target anatomical structure according to the identification;
and identifying type information of the target anatomical structure from the three-dimensional volume data at the position information of the target anatomical structure by adopting a machine learning method.
8. The optimization method of ultrasonic imaging according to any one of claims 5 to 7, wherein before acquiring three-dimensional volume data of a tested object obtained through ultrasonic examination, the method further comprises:
acquiring an ultrasonic probe type and/or an inspection mode;
and determining a preset sample database according to the type of the ultrasonic probe and/or the inspection mode.
9. The method of optimizing ultrasound imaging according to any of claims 2 to 8,
identifying location information of a target anatomical structure from the three-dimensional volume data;
and identifying the direction information of the target anatomical structure according to the position information of the target anatomical structure.
10. The method of optimizing ultrasound imaging according to claim 2,
the light source parameters include at least one of light source type, volume light contrast, volume light intensity, number of light sources, light source direction, and light source position.
11. The method of optimizing ultrasound imaging according to claim 10, wherein automatically setting light source parameters according to the target anatomical structure information comprises:
automatically setting at least one of a light source type, a volume light contrast and a volume light intensity according to the type information of the target anatomical structure; and/or
Automatically setting at least one of a number of light sources and a direction of the light sources according to the direction information of the target anatomical structure; and/or
And automatically setting the position of the light source according to the position information of the target anatomical structure.
12. An ultrasound imaging system, characterized in that the ultrasound imaging system comprises:
the ultrasonic probe is used for transmitting ultrasonic waves to a tested object and receiving ultrasonic echoes so as to acquire ultrasonic signals obtained by carrying out ultrasonic detection on the tested object;
a memory for storing a program;
a processor for implementing the method of any one of claims 1-11 by executing a program stored by the memory.
13. An ultrasound image processing system, comprising:
a memory for storing a program;
a processor for implementing the method of any one of claims 1-11 by executing a program stored by the memory.
14. A computer-readable storage medium, comprising a program executable by a processor to implement the method of any one of claims 1-11.
CN201811642211.XA, priority date 2018-12-29, filing date 2018-12-29: Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium (Pending; published as CN111403007A)

Priority Applications (1)

Application Number: CN201811642211.XA
Priority Date: 2018-12-29
Filing Date: 2018-12-29
Title: Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium


Publications (1)

Publication Number: CN111403007A
Publication Date: 2020-07-10

Family

ID=71435812


Country Status (1)

CN: CN111403007A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001998A (en) * 2020-09-02 2020-11-27 西南石油大学 Real-time simulation ultrasonic imaging method based on OptiX and Unity3D virtual reality platforms


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101884551A (en) * 2009-05-15 2010-11-17 深圳迈瑞生物医疗电子股份有限公司 Method for increasing self-adjusting performance of ultrasonic Doppler imaging and ultrasonic system thereof
CN103690137A (en) * 2014-01-07 2014-04-02 深圳市开立科技有限公司 Endoscope light source brightness automatic adjusting method and device
CN108230261A (en) * 2016-12-09 2018-06-29 通用电气公司 Full-automatic image optimization based on automated organ identification
WO2018205274A1 (en) * 2017-05-12 2018-11-15 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic device, and method and system for transforming display of three-dimensional ultrasonic image thereof
CN108876764A (en) * 2018-05-21 2018-11-23 北京旷视科技有限公司 Render image acquiring method, device, system and storage medium
CN108720868A (en) * 2018-06-04 2018-11-02 深圳华声医疗技术股份有限公司 Blood flow imaging method, apparatus and computer readable storage medium
CN109063740A (en) * 2018-07-05 2018-12-21 高镜尧 The detection model of ultrasonic image common-denominator target constructs and detection method, device


Similar Documents

Publication Publication Date Title
US20210177373A1 (en) Ultrasound system with an artificial neural network for guided liver imaging
JP6629094B2 (en) Ultrasound diagnostic apparatus, medical image processing apparatus, and medical image processing program
CN110087550B (en) Ultrasonic image display method, equipment and storage medium
US11521363B2 (en) Ultrasonic device, and method and system for transforming display of three-dimensional ultrasonic image thereof
US10157500B2 (en) Utilizing depth from ultrasound volume rendering for 3D printing
US10692272B2 (en) System and method for removing voxel image data from being rendered according to a cutting region
CN113905670A (en) Guided ultrasound imaging
CN111836584B (en) Ultrasound contrast imaging method, ultrasound imaging apparatus, and storage medium
CN115131427A (en) System and method for automatic light placement for medical visualization
CN111142753A (en) Interactive method, information processing method and storage medium
CN112862955A (en) Method, apparatus, device, storage medium and program product for building three-dimensional model
CN111403007A (en) Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium
EP3933848A1 (en) Vrds 4d medical image processing method and product
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
US20230137369A1 (en) Aiding a user to perform a medical ultrasound examination
US11094116B2 (en) System and method for automatic generation of a three-dimensional polygonal model with color mapping from a volume rendering
JP2018149055A (en) Ultrasonic image processing device
WO2021120059A1 (en) Measurement method and measurement system for three-dimensional volume data, medical apparatus, and storage medium
WO2020133236A1 (en) Spinal imaging method and ultrasonic imaging system
WO2022134049A1 (en) Ultrasonic imaging method and ultrasonic imaging system for fetal skull
US11532244B2 (en) System and method for ultrasound simulation
CN112689478B (en) Ultrasonic image acquisition method, system and computer storage medium
WO2021081842A1 (en) Intestinal neoplasm and vascular analysis method based on vrds ai medical image and related device
CN115619941A (en) Ultrasonic imaging method and ultrasonic equipment
IL303325A (en) Ultrasound simulation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination