CN114287963A - Image processing method, image processing device, electronic equipment and computer readable medium - Google Patents


Publication number: CN114287963A
Authority: CN (China)
Prior art keywords: image, artifact, region, initial, determining
Prior art date
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Application number: CN202111663348.5A
Other languages: Chinese (zh)
Inventor: 文银刚
Current assignee: Chongqing Haifu Medical Technology Co., Ltd. (the listed assignee may be inaccurate)
Original assignee: Chongqing Haifu Medical Technology Co., Ltd.
Application filed by Chongqing Haifu Medical Technology Co., Ltd.
Priority application: CN202111663348.5A
Publication: CN114287963A
Related PCT application: PCT/CN2022/139412 (WO2023125058A1)

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an image processing method comprising the following steps: acquiring a first image and a second image generated with an ultrasonic probe at different positions, wherein the interval between the generation times of the first image and the second image is less than a preset time threshold; determining an initial superposition region of the first image and the second image according to a preset mark of the first image and a preset mark of the second image; and determining a target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region. The clear parts of the two images, i.e. the non-artifact regions, are used together, and superposition yields a target image free of artifacts. The ultrasonic probe need not be lifted and lowered over a long distance, nor pressed tightly against the skin, so a high-quality monitoring image can be obtained safely and efficiently, which helps to improve the treatment effect. The invention also discloses an image processing apparatus, an electronic device and a computer-readable medium.

Description

Image processing method, image processing device, electronic equipment and computer readable medium
Technical Field
The invention relates to the technical field of image monitoring for minimally and non-invasive ("micro-noninvasive") treatment, and in particular to an image processing method, an image processing apparatus, an electronic device and a computer-readable medium.
Background
During micro-noninvasive surgical treatment, B-mode ultrasound (B-scan ultrasound) is often used for real-time image monitoring. However, the monitored images frequently contain artifacts: one or more arc-shaped bands that overlie the real monitored object, obscuring part of it and degrading the quality of the monitoring image.
Disclosure of Invention
Therefore, the invention provides an image processing method, an image processing apparatus, an electronic device and a computer-readable medium to solve the prior-art problem of poor image quality caused by artifacts in monitored images.
The invention provides an image processing method in a first aspect, wherein the method comprises the following steps:
acquiring a first image and a second image generated when an ultrasonic probe is positioned at different positions, wherein the generation time interval of the first image and the second image is less than a preset time threshold;
determining an initial overlapping area of the first image and the second image according to a preset mark of the first image and a preset mark of the second image;
and determining a target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region.
In some embodiments, the step of determining the initial overlapping area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image comprises:
identifying a preset mark of the first image and a preset mark of the second image;
moving the first image or the second image such that a preset mark of the first image overlaps a preset mark of the second image;
and determining a part formed by overlapping the first image and the second image as the initial overlapping area.
In some embodiments, before the step of determining the portion of the first image and the second image superimposed as the initial superimposition area, the method further includes:
and preprocessing the first image and the second image to enable the parts of the first image and the second image which can be superposed to have the same brightness and contrast.
In some embodiments, the non-artifact regions of the first image and the non-artifact regions of the second image each comprise a separate non-artifact region and a non-artifact region corresponding to the initial overlap region;
the step of determining a target image from the non-artifact region of the first image, the non-artifact region of the second image and the initial overlap region comprises:
replacing the initial overlapping area according to the non-artifact area corresponding to the initial overlapping area to obtain a final overlapping area;
and splicing the independent non-artifact region of the first image, the independent non-artifact region of the second image and the final superposition region to obtain the target image.
In some embodiments, the replacing the initial overlapping region according to the non-artifact region corresponding to the initial overlapping region includes:
replacing a part of the initial superposition region corresponding to an artifact region in the first image with a non-artifact region of a corresponding position range in the second image;
replacing a part of the initial superposition region corresponding to the artifact region in the second image with a non-artifact region of a corresponding position range in the first image;
determining a target area from the non-artifact areas corresponding to the initial superposition area according to the pixel values of the first image and the second image, and replacing the non-artifact areas in the initial superposition area with the target area.
In some embodiments, the first image is generated when the ultrasound probe is at a position closer to a target monitoring location than when the second image is generated, and the step of determining a target region from the non-artifact region corresponding to the initial overlap region based on the pixel values of the first image and the pixel values of the second image comprises:
and in the case that the difference between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold, determining the target area according to the pixel information of the first image, or determining the target area according to the average of the pixel information of the first image and the pixel information of the second image.
In some embodiments, the step of acquiring the first and second images generated while the ultrasound probe is at different locations comprises:
controlling, according to a vibration signal, the ultrasonic probe to vibrate at a preset vibration frequency in the direction perpendicular to the target position and to emit detection ultrasonic waves;
generating a plurality of images according to the ultrasonic echoes received by the ultrasonic probe;
and determining the first image and the second image from the plurality of images according to the vibration signal.
In some embodiments, the step of controlling, according to the vibration signal, the ultrasonic probe to vibrate at the preset vibration frequency in the direction perpendicular to the target position and to emit detection ultrasonic waves includes:
and sending the vibration signal to a vibration mechanism, so that the vibration mechanism drives the ultrasonic probe to vibrate at the preset vibration frequency in the direction perpendicular to the target position and to emit detection ultrasonic waves.
In some embodiments, the vibration mechanism is a cam mechanism; the cycle angle of the cam at the short diameter and the cycle angle at the long diameter both lie in the interval [90°, 140°], and the cycle angle between the short diameter and the long diameter lies in the interval [80°, 180°].
A second aspect of the present invention provides an image processing apparatus, wherein the image processing apparatus includes:
the acquisition module is used for acquiring a first image and a second image which are generated when the ultrasonic probe is positioned at different positions and the generation time interval of which is smaller than a preset time threshold;
the first processing module is used for determining an initial overlapping area of the first image and the second image according to a preset mark of the first image and a preset mark of the second image;
and the second processing module is used for determining a target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region.
A third aspect of the present invention provides an electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the image processing method as previously described;
one or more I/O interfaces, connected between the one or more processors and the storage device, configured to enable information interaction between the processors and the storage device.
A fourth aspect of the invention provides a computer-readable medium on which a computer program is stored, which, when executed by a processor, implements the image processing method described above.
The invention has the following advantages:
With the image processing method provided by the embodiments of the invention, a first image and a second image are generated with the ultrasonic probe at different positions, with a generation time interval smaller than a preset time threshold; an initial superposition region of the two images is determined according to the preset mark of the first image and the preset mark of the second image; and a target image is determined according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region. The method makes combined use of the clear parts of the two images, i.e. the non-artifact regions, and obtains an artifact-free target image through superposition, so that a high-quality monitoring image can be obtained safely and efficiently without lifting and lowering the ultrasonic probe over a long distance or pressing it tightly against the skin. Because the two images are generated within the preset time threshold of each other, the state of the human body is almost unchanged between them, so the target image truly and accurately reflects the condition of the body's organ tissue, which helps to improve the treatment effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1a is a schematic view of monitoring images generated when the vibrating ultrasonic probe is at a high position and at a low position, according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of image overlay according to an embodiment of the present invention;
fig. 2 is a first schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a second flowchart illustrating an image processing method according to an embodiment of the present invention;
fig. 4 is a third schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 5 is a fourth schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 6 is a fifth flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
When the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The embodiments of the invention may be described with reference to plan and/or cross-sectional views in idealized schematic representations of the invention. Accordingly, the example illustrations can be modified in accordance with manufacturing techniques and/or tolerances.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In micro-noninvasive surgery such as FUS (focused ultrasound) treatment, B-mode ultrasound is often used for real-time image monitoring. The ultrasound probe, however, is usually mounted on a mechanical device and cannot use coupling gel; only water or similar media couple it to the body, so the probe is not in good contact with the skin and another medium lies between probe and skin. The ultrasound is then reflected multiple times and produces artifacts in the image: reflected images of one or more FUS transducer surfaces, appearing as one or more arcs that overlie the real monitored object. Part of the monitored object is thereby obscured, and the quality of the monitoring image suffers; it is not only inferior to the anatomical expressiveness of MRI (magnetic resonance imaging)/CT, but can even be inferior to diagnostic ultrasound.
When the ultrasonic probe is in full contact with the skin, the artifacts are markedly reduced or even disappear completely. At that point, however, the probe blocks the high-intensity focused ultrasound of the FUS and presses against the skin, so the FUS energy is attenuated and the focal form degrades; the skin may even be burned and the probe damaged. A safe and effective means of dealing with the artifact problem has therefore been proposed: the operator continually lifts and lowers the ultrasonic probe by hand, so that body tissue and the target region can be seen clearly while the FUS is not irradiating, and the safety of the skin and the probe is preserved during FUS irradiation.
This approach mitigates the artifact problem to some extent, but the workload on the physician is high because the probe must constantly be moved toward and away from the skin. In some cases the physician must press the probe firmly against the skin to see the monitored object, which can abrade the skin; the risk of abrasion is greater still if the probe is moved horizontally while already pressed tight in the direction perpendicular to the target position. Mechanical mechanisms are also prone to damage when moved at high frequency over long distances. It has therefore been proposed to lift and lower the probe continually under automatic control, which reduces the physician's workload; but moving the probe at high frequency over long distances remains inefficient, and the images cannot be kept stable and clear, which still adversely affects treatment.
In view of the above, careful study underlying the present invention found that when the ultrasound probe moves up and down, toward or away from the skin, the body tissue moves in the image by the same distance: for every 1 cm the probe moves away from the skin, the position of the body tissue in the image moves down 1 cm, closer to the bottom of the image (i.e. the position of the probe). The position of an artifact in the image does not follow this rule: because the artifact is produced by reflection, it moves in the image at 2 times (one reflection) or 4 times (two reflections) the speed of the body tissue. Fig. 1a is a schematic view of the monitoring images when the vibrating ultrasound probe is at a high position and at a low position. In Fig. 1a, image A on the left is taken with the probe at the high position and image B on the right with the probe at the low position; the dotted lines in image B show the artifact region and the skin of image A mapped into image B at the same positions. At the high position the probe is closer to the skin than at the low position, so the skin in image A is closer to the bottom of the image than the skin in image B.
This phenomenon can be fully exploited. Using the fact that, when the ultrasound probe moves rapidly, body tissue moves in the image more slowly than the artifact does, two images generated with the probe at different positions are acquired; the clear parts of the two images, i.e. the non-artifact regions, are used together; and superposition yields an artifact-free target image. The probe need not be lifted and lowered over a long distance, nor pressed tightly against the skin, so artifacts in the image can be eliminated safely and efficiently.
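The 1×/2×/4× relation above can be checked with a toy calculation. The function and values below are purely illustrative and not part of the patent:

```python
def image_shifts(probe_displacement_cm, reflections=1):
    """Toy model of the observation above: when the probe moves by d,
    tissue shifts by d in the image, while a reflection artifact shifts
    by 2*d after one reflection or 4*d after two reflections."""
    tissue_shift = probe_displacement_cm
    artifact_shift = probe_displacement_cm * (2 ** reflections)
    return tissue_shift, artifact_shift

print(image_shifts(1.0, reflections=1))  # (1.0, 2.0): the artifact outruns the tissue
print(image_shifts(1.0, reflections=2))  # (1.0, 4.0)
```

Because the artifact moves faster than the tissue, the artifact regions of two frames taken at different probe heights land on different parts of the (aligned) anatomy, which is what makes the superposition scheme possible.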
Accordingly, in a first aspect, an embodiment of the present invention provides an image processing method, as shown in fig. 2, the method may include the following steps:
in step S1, a first image and a second image generated when the ultrasound probe is located at different positions are acquired, wherein a generation time interval of the first image and the second image is smaller than a preset time threshold.
In step S2, an initial overlapping area of the first image and the second image is determined according to the preset mark of the first image and the preset mark of the second image.
In step S3, a target image is determined from the non-artifact region of the first image, the non-artifact region of the second image, and the initial superimposition region.
Here, the ultrasound probe being at different positions means that it is at different heights from the target detection position, and the preset time threshold may be a short period of time, for example 0.5 s, 1 s or 1.5 s. The preset mark may be an object, such as a particular organ or tissue of the body, that moves in the image more slowly than the artifact does when the probe moves rapidly. Non-artifact regions are regions of the image in which no artifact is present.
As can be seen from steps S1-S3, the image processing method of the embodiments of the invention uses a first image and a second image that are generated with the ultrasound probe at different positions and whose generation time interval is smaller than a preset time threshold, determines the initial superposition region of the two images according to the preset mark of the first image and the preset mark of the second image, and determines the target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region. The method makes combined use of the clear parts of the two images, i.e. the non-artifact regions, and obtains an artifact-free target image through superposition, so that a high-quality monitoring image can be obtained safely and efficiently without lifting and lowering the probe over a long distance or pressing it tightly against the skin. Because the two images are generated within the preset time threshold of each other, the state of the human body is almost unchanged between them, so the target image truly and accurately reflects the condition of the organ tissue, which helps to improve the treatment effect.
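Steps S1-S3 can be sketched end to end as follows. All names, the row-shift alignment, the per-frame boolean artifact masks and the averaging choice are illustrative assumptions, not prescribed by the patent:

```python
import numpy as np

def fuse_frames(img_a, img_b, mark_row_a, mark_row_b, artifact_a, artifact_b):
    """Sketch of S1-S3: align img_b to img_a on the preset mark (modelled
    here as a known horizontal skin row), then fill each pixel of the
    overlap from whichever frame is artifact-free at that position.
    artifact_a / artifact_b are boolean masks (True = artifact pixel)."""
    shift = mark_row_a - mark_row_b            # rows to move img_b by (S2)
    b_aligned = np.roll(img_b, shift, axis=0)  # crude alignment sketch
    b_art = np.roll(artifact_b, shift, axis=0)

    out = img_a.astype(float).copy()
    # where A has an artifact but the aligned B is clean, take B's pixel
    take_b = artifact_a & ~b_art
    out[take_b] = b_aligned[take_b]
    # where both frames are clean, average them (one option used later)
    both_clean = ~artifact_a & ~b_art
    out[both_clean] = (img_a[both_clean] + b_aligned[both_clean]) / 2.0
    return out
```

Where only B has an artifact, `out` simply keeps A's pixel, mirroring the replacement rules detailed below.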
An existing method of eliminating image artifacts is to pre-store a clear, nearly artifact-free image taken with the ultrasound probe close to the skin; whenever a real-time image is received, the pre-stored clear image is superimposed on it according to position information, i.e. a software algorithm fills the corresponding part of the pre-stored clear image into the current real-time image, so that the result is clear and artifact-free. But because the clear image is pre-stored, the positions of organs and tissues may have shifted slightly due to physiological motion and other causes, so the pre-stored image cannot accurately reflect the current state of the body's organ tissue. By contrast, in the method of the embodiments of the invention the two images used to eliminate the artifact are generated within the preset time threshold of each other and are both acquired in real time, so the condition of the organ tissue is reflected truly and accurately, which helps to improve the treatment effect.
Specifically, the step of determining the initial overlapping area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image (i.e. step S2) may further include, as shown in fig. 3, the following steps:
in step S21, the preset flag of the first image and the preset flag of the second image are identified.
In step S22, the first image or the second image is moved so that the preset mark of the first image overlaps the preset mark of the second image.
In step S23, a portion in which the first image and the second image are superimposed is determined as an initial superimposition area.
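Steps S21-S23 can be sketched as follows. Detecting the preset mark as the brightest image row (standing in for the skin line) is a heuristic assumed purely for illustration:

```python
import numpy as np

def align_on_mark(img_a, img_b):
    """Sketch of S21-S23: identify the preset mark in each frame (here,
    the brightest row), move the second image so the marks coincide, and
    return the aligned frame plus the row offset. The common extent of
    img_a and the aligned img_b is then the initial superposition area."""
    row_a = int(np.argmax(img_a.sum(axis=1)))   # S21: mark in first image
    row_b = int(np.argmax(img_b.sum(axis=1)))   # S21: mark in second image
    shift = row_a - row_b                       # S22: move second image
    aligned_b = np.roll(img_b, shift, axis=0)
    return aligned_b, shift                     # S23: overlap of the two
```

A production implementation would segment the skin line properly and crop rather than roll; this sketch only shows the mark-overlap idea.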
To further improve image quality, the first image and the second image may be preprocessed so that the parts of the two images that can be superimposed have the same brightness and contrast. Accordingly, in some embodiments, before the step of determining the superimposed portion of the first and second images as the initial superimposition area (i.e. step S23), the method may further include the step of preprocessing the first image and the second image so that their superimposable parts have the same brightness and contrast.
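One simple way to equalize brightness and contrast is mean/standard-deviation matching. The patent does not prescribe a specific algorithm; this is an assumed, minimal sketch:

```python
import numpy as np

def match_brightness_contrast(src, ref):
    """Normalize src so its mean (brightness) and standard deviation
    (contrast) match ref. In the method above this would be applied to
    the superimposable parts of the two frames before overlap."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    src_std = src.std() or 1.0     # guard against a flat image
    return (src - src.mean()) / src_std * ref.std() + ref.mean()
```

Histogram matching would be a heavier-weight alternative serving the same purpose.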
The initial superposition region is formed by superimposing the images with the preset mark of the first image and the preset mark of the second image taken as the reference. Because the first and second images are generated with the ultrasound probe at different positions, the preset mark sits at different positions in the two images, so the images cannot be superimposed completely; and the parts that cannot be superimposed usually contain no artifact, owing to their distance. Each of the first and second images therefore comprises an independent non-artifact region, which does not correspond to the initial superposition region, and a non-artifact region corresponding to the initial superposition region. The artifact region within the initial superposition region can be replaced according to the non-artifact regions of the first and second images corresponding to the initial superposition region, yielding an artifact-free final superposition region.
Accordingly, in some embodiments, as shown in fig. 4, the non-artifact regions of the first image and the non-artifact regions of the second image each comprise a separate non-artifact region and a non-artifact region corresponding to the initial overlap region; the step of determining the target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial overlap region (i.e. step S3) may further include the steps of:
in step S31, the initial superimposition region is replaced with a non-artifact region corresponding to the initial superimposition region, and a final superimposition region is obtained.
In step S32, the independent non-artifact region of the first image, the independent non-artifact region of the second image, and the final superimposition region are stitched to obtain a target image.
First, both the artifact region and the non-artifact region within the initial superposition region are replaced according to the non-artifact region of the first image corresponding to the initial superposition region and the non-artifact region of the second image corresponding to it, so that the resulting final superposition region contains no artifact. Then the independent non-artifact region of the first image (the part that cannot be superimposed with the second image), the independent non-artifact region of the second image (the part that cannot be superimposed with the first image) and the artifact-free final superposition region are stitched together, giving a high-quality, artifact-free target image.
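The stitching in step S32 amounts to reassembling the three pieces along the scan depth. Vertical stacking is an assumption about the image geometry, for illustration only:

```python
import numpy as np

def stitch_target(indep_first, final_overlap, indep_second):
    """Sketch of S32: stack the first image's independent non-artifact
    strip, the artifact-free final superposition region, and the second
    image's independent non-artifact strip into one target image."""
    return np.vstack([indep_first, final_overlap, indep_second])
```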
Specifically, the part of the initial superposition region corresponding to the artifact region in the first image may also be called the artifact region of the initial superposition region whose artifact is superimposed from the first image, and the part corresponding to the artifact region in the second image, the artifact region whose artifact is superimposed from the second image. For the part corresponding to the artifact region in the first image, the corresponding position range in the second image is usually a non-artifact region, and that part can be replaced directly with the non-artifact region of the corresponding position range in the second image. Symmetrically, the part corresponding to the artifact region in the second image can be replaced directly with the non-artifact region of the corresponding position range in the first image. For the non-artifact region of the initial superposition region, where no artifact is superimposed from either image, that part could in principle be replaced directly with the corresponding position range of either image; but to further improve image quality, a target region may be determined from the first and second images according to the pixel values of the first image and the pixel values of the second image, and the non-artifact region of the initial superposition region replaced with that target region.
Accordingly, in some embodiments, as shown in fig. 5, the step of performing the replacement processing on the initial overlapping region according to the non-artifact region corresponding to the initial overlapping region (i.e. step S31) may further include the following steps:
in step S311, a portion of the initial superimposition region corresponding to the artifact region in the first image is replaced with a non-artifact region of the corresponding position range in the second image.
In step S312, a portion of the initial superimposition region corresponding to the artifact region in the second image is replaced with a non-artifact region of the corresponding position range in the first image.
In step S313, a target area is determined from the non-artifact areas corresponding to the initial superimposition area according to the pixel values of the first image and the second image, and the non-artifact areas in the initial superimposition area are replaced with the target area.
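Steps S311-S313 can be sketched on the aligned overlap as mask-driven replacement. The boolean masks, the threshold value and the assumption that no pixel is an artifact in both frames at once are illustrative choices, not requirements of the patent:

```python
import numpy as np

def build_final_overlap(ov_a, ov_b, art_a, art_b, pixel_threshold=10.0):
    """ov_a/ov_b: first/second image restricted to the initial overlap.
    art_a/art_b: boolean artifact masks (True = artifact pixel)."""
    out = ov_a.astype(float).copy()
    b = ov_b.astype(float)

    out[art_a] = b[art_a]            # S311: A's artifact -> B's clean pixels
    # S312 is implicit: where art_b holds, out already carries A's pixels.
    clean = ~art_a & ~art_b          # S313: both frames clean here
    close = np.abs(out - b) < pixel_threshold
    avg = clean & close
    out[avg] = (out[avg] + b[avg]) / 2.0  # one option: average the frames
    return out
```

Keeping the first image's pixel instead of averaging, as the next passage allows, would simply drop the final assignment.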
Fig. 1B is a schematic diagram of image superimposition according to an embodiment of the present invention. Referring to figs. 1A and 1B, assume that image A is the first image, image B is the second image, and the preset mark is the skin. Since images A and B are generated with the ultrasound probe at different positions, the skin position in image A differs from the skin position in image B. Image A is held fixed and image B is moved so that the skin in image B overlaps the skin in image A. After this alignment, the artifact region in image B does not coincide with the artifact region in image A; that is, the skin in image A is also the skin in image B after superimposition, while the two artifact regions are separated by a certain distance. The artifact region in image A can therefore be replaced with the non-artifact region of the corresponding position range in image B, and the artifact region in the superimposed image B can be replaced with the non-artifact region of the corresponding position range in image A. It should be noted that the "corresponding position range" here is determined with the skin in image A or the skin in image B as the reference. For example, when the artifact region in image A shown in the figure is 3 cm from the skin in image A, that artifact region is replaced with the non-artifact region of the same position range 3 cm from the skin in image B.
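The skin-referenced "corresponding position range" can be illustrated with a small sketch: a row range defined by its distance below image A's skin line is mapped to the rows at the same distance below image B's skin line. The skin-row indices and the row spacing below are hypothetical stand-ins for the figure's values.

```python
# Hedged sketch: map rows measured relative to A's skin line onto the
# rows at the same skin-relative depth in B. Values are made up.

def corresponding_rows(rows_in_a, skin_row_a, skin_row_b):
    """Rows in image B at the same skin-relative depth as rows_in_a."""
    shift = skin_row_b - skin_row_a
    return [r + shift for r in rows_in_a]

# Example: an artifact region 300 rows (say, 3 cm) below A's skin line
skin_a, skin_b = 40, 55
artifact_rows_a = [skin_a + 300, skin_a + 301]  # rows 340, 341 in A
rows_in_b = corresponding_rows(artifact_rows_a, skin_a, skin_b)
```

The mapped rows sit exactly 300 rows below B's skin line, mirroring the "3 cm from the skin" example in the text.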
In some embodiments, the first image is generated when the ultrasound probe is at a position closer to the target monitoring position than when the second image is generated, and the step of determining the target region from the non-artifact region corresponding to the initial superimposition region according to the pixel values of the first image and the pixel values of the second image (i.e., step S313) may further include the following step: when the difference between the pixel values of the first image and the pixel values of the second image is smaller than a preset pixel threshold, determining the target region according to the pixel information of the first image, or determining the target region according to the average of the pixel information of the first image and the pixel information of the second image.
For the non-artifact region in the initial superimposition region, the artifact is superimposed neither from the first image nor from the second image; both the first image and the second image have a corresponding non-artifact region. When the pixel values of the first image differ little from those of the second image, the non-artifact region in the initial superimposition region can be replaced directly with the pixel information of the corresponding non-artifact region in the first image, or with the average of the pixel information of the corresponding non-artifact regions in the first image and the second image.
In some embodiments, when the pixel values of the first image differ greatly from those of the second image and the non-artifact region in the initial superimposition region is close to the skin, that region may be replaced with the pixel information of the corresponding non-artifact region in the second image; otherwise it may be replaced with the pixel information of the corresponding non-artifact region in the first image.
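A per-pixel sketch of this selection rule might look as follows; `PIXEL_THRESHOLD` and the `near_skin` flag are hypothetical stand-ins for the preset pixel threshold and the skin-distance check, whose concrete values the text does not give.

```python
# Hedged sketch of the target-region selection rule described above.

PIXEL_THRESHOLD = 30  # hypothetical example value

def pick_pixel(p1, p2, near_skin):
    """p1: pixel from the first image (probe closer to the target),
    p2: pixel from the second image at the same position."""
    if abs(p1 - p2) < PIXEL_THRESHOLD:
        return p1  # or (p1 + p2) // 2, per the earlier embodiment
    return p2 if near_skin else p1
```

When the two images roughly agree, the first image wins; when they disagree strongly, the second image is preferred only near the skin.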
Acquiring the first image and the second image generated when the ultrasound probe is located at different positions may be implemented by controlling the ultrasound probe to vibrate up and down, and accordingly, in some embodiments, as shown in fig. 6, the step of acquiring the first image and the second image generated when the ultrasound probe is located at different positions (i.e., step S1) may further include the following steps:
in step S11, based on the vibration signal, the ultrasonic probe is controlled to vibrate at the preset vibration frequency in a direction perpendicular to the target position and to emit detection ultrasonic waves.
In step S12, a plurality of images are generated from the ultrasound echoes received by the ultrasound probe.
In step S13, a first image and a second image are determined from the plurality of images based on the vibration signal.
The vibration signal controls the ultrasonic probe to vibrate at the preset vibration frequency in the direction perpendicular to the target position, so the position of the ultrasonic probe can be judged from the vibration signal. Because the images generated from the ultrasound echoes received by the probe are almost real-time, the first image, generated when the vibrating ultrasonic probe is at the high position, and the second image, generated when it is at the low position, can be determined from the plurality of images according to the vibration signal.
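A minimal sketch of this frame selection, assuming the vibration signal has already been quantized into 'high', 'low', and 'transition' levels at each frame's timestamp (an assumption not spelled out in the text):

```python
# Minimal sketch: pair each real-time frame with the vibration level
# at its timestamp and keep the first high-position and first
# low-position frames as the first and second images.

def pick_frames(frames):
    """frames: list of (vibration_level, image) pairs, where
    vibration_level is 'high', 'low', or 'transition'."""
    first_img = next(img for lvl, img in frames if lvl == "high")
    second_img = next(img for lvl, img in frames if lvl == "low")
    return first_img, second_img

frames = [("transition", "f0"), ("high", "img1"),
          ("high", "f2"), ("low", "img2")]
first, second = pick_frames(frames)
```

Frames captured during the transition phase are skipped, which also keeps the generation interval between the chosen pair small.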
In some embodiments, the step of controlling, according to the vibration signal, the ultrasonic probe to vibrate at the preset vibration frequency in a direction perpendicular to the target position and to emit detection ultrasonic waves (i.e., step S11) may further include the following step: sending the vibration signal to a vibration mechanism, so that the vibration mechanism pushes the ultrasonic probe to vibrate at the preset vibration frequency in the direction perpendicular to the target position and to emit detection ultrasonic waves.
In some embodiments, the vibration mechanism is a cam mechanism; the cycle angle of the cam mechanism at the short diameter and the cycle angle at the long diameter both lie in the interval [90°, 140°], and the cycle angle between the short diameter and the long diameter lies in the interval [80°, 180°].
The vibration mechanism drives the ultrasonic probe to vibrate up and down continuously so that a plurality of real-time images can be obtained. By designing the vibration mechanism as a cam mechanism whose cycle angles at the short diameter and at the long diameter both lie in the interval [90°, 140°], and whose cycle angle between the short diameter and the long diameter lies in the interval [80°, 180°], it can be ensured that the vibrating ultrasonic probe stays at the low position or the high position for most of the time and spends only a small part of the time between them.
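Assuming the cam rotates at constant speed, the fraction of each revolution the probe dwells at the low or high extreme follows directly from the two dwell cycle angles. The sketch below uses example dwell angles from the top of the stated interval; the constant-speed assumption is mine, not the text's.

```python
# Back-of-envelope sketch: share of a cam revolution spent with the
# probe at its lowest or highest point, assuming constant rotation.

def dwell_fraction(short_angle_deg, long_angle_deg):
    """Fraction of a 360-degree cam cycle spent at the two extremes."""
    return (short_angle_deg + long_angle_deg) / 360.0

# Example: both dwell angles at 140 degrees -> 280 degrees at the
# extremes and only 80 degrees in transition per revolution.
f = dwell_fraction(140, 140)
```

With these example angles the probe is at an extreme for roughly 78% of each cycle, consistent with "most of the time" at the low or high position.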
The following briefly describes the image processing method provided by the present invention with reference to a specific embodiment.
As shown in fig. 7, the image processing apparatus provided by the embodiment of the present invention is composed of an FUS transducer, an ultrasonic probe, a vibration mechanism, an ultrasonic imaging module, an image acquisition module, an image processing module and a control module, and its working flow may include the following steps:
(I) The control module transmits a vibration signal to the vibration mechanism to initiate vibration control, so that the vibration mechanism vibrates at the preset vibration frequency; meanwhile, the control module also transmits the vibration signal to the image processing module.
If the vibration frequency is set too low, real-time performance is affected; if it is set too high, the images become blurred. To ensure image quality, the preset vibration frequency may be set between 1 Hz and 20 Hz, and the vibration amplitude between 3 mm and 15 mm.
(II) The vibration mechanism pushes the ultrasonic probe to vibrate up and down.
The vibration mechanism may consist of a dedicated cam mechanism that pushes the ultrasonic probe up and down, and may be designed as follows: the proportions of the cycle spent at the short diameter (the ultrasonic probe lowered to its lowest point) and at the long diameter (the ultrasonic probe raised to its highest point) are each more than 40%, while the proportion spent in the transition portion (the ultrasonic probe between the lowest and highest points) is kept within 20%.
(III) The continuously vibrating ultrasonic probe transmits detection ultrasonic waves and receives ultrasonic echoes.
(IV) The ultrasonic probe transmits the ultrasonic echoes to the ultrasonic imaging module.
(V) The image acquisition module acquires images from the ultrasonic imaging module and transmits the acquired images to the image processing module.
(VI) The image processing module receives the images transmitted by the image acquisition module in real time and also receives the vibration signal transmitted by the control module in real time, and judges the position of the ultrasonic probe according to the vibration signal. From the vibration signal, the image img1 generated when the vibrating probe is at the high position (see diagram A in fig. 1A) and the image img2 generated when it is at the low position (see diagram B in fig. 1A) are taken respectively. The artifact regions on img1 and img2 are then located and removed, and img1 and img2 are offset and superimposed according to the high and low positions to form an image, which is then displayed.
Wherein, the step of removing the artifact region may further include the steps of:
(1) Determine the vertical distance d1 between the ultrasonic probe and the skin when img1 is generated and the vertical distance d2 when img2 is generated, and calculate the difference dis = d2 - d1.
(2) Taking img1 as the base map, move img2 downward by the distance dis so that the skin line of img2 overlaps the skin line of img1, and calculate the superimposable region (equivalent to intersecting two sectors).
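Steps (1) and (2) can be sketched as a simple row shift, assuming d1 and d2 have already been converted into pixel rows; the distances, the tiny image, and the padding value are hypothetical examples.

```python
# Hedged sketch of steps (1)-(2): compute dis = d2 - d1 in rows and
# shift img2 downward so the two skin lines coincide.

def shift_down(img, dis, pad=0):
    """Move every row of img down by dis rows, padding the top."""
    h, w = len(img), len(img[0])
    return [[pad] * w for _ in range(dis)] + img[: h - dis]

d1, d2 = 12, 15          # probe-to-skin distances (in rows) for img1, img2
dis = d2 - d1            # step (1): difference between the two
img2 = [[1, 1], [2, 2], [3, 3], [4, 4]]
img2_aligned = shift_down(img2, dis)  # step (2): align skin lines
```

After the shift, the superimposable region is simply the rows both images still cover.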
(3) Preprocess img1 and img2 so that the superimposable regions of img1 and img2 have the same brightness and contrast.
(4) Determine the portion where img1 and img2 overlap as the initial superimposition region and compare it with img1 and img2. Replace the portion of the initial superimposition region corresponding to the artifact region in img1 with the non-artifact region of the corresponding position range in img2, and replace the portion corresponding to the artifact region in img2 with the non-artifact region of the corresponding position range in img1. For the non-artifact region in the initial superimposition region, if the pixel values of img1 and img2 differ little, the pixel information of the non-artifact region of the corresponding position range in img1 may be used directly, or the average of the pixel information of the non-artifact regions of the corresponding position ranges in img1 and img2 may be used. As another embodiment, if the pixel values of img1 and img2 differ greatly and the non-artifact region in the initial superimposition region is close to the skin, it may be replaced with the pixel information of the non-artifact region of the corresponding position range in img2; otherwise it is replaced with the pixel information of the non-artifact region of the corresponding position range in img1.
In this step, an optical flow method may be adopted to reduce errors caused by slight body displacement and to find the drift direction of the artifact.
The steps of the above methods are divided for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering the core design also falls within the protection scope of this patent.
Based on the same technical concept, in a second aspect, the embodiment of the invention provides an image processing apparatus. As shown in fig. 8, the image processing apparatus may include:
the acquiring module 101 is configured to acquire a first image and a second image which are generated when the ultrasound probe is located at different positions and whose generation time interval is smaller than a preset time threshold.
The first processing module 102 is configured to determine an initial overlapping area of the first image and the second image according to a preset mark of the first image and a preset mark of the second image.
A second processing module 103, configured to determine a target image according to the non-artifact region of the first image, the non-artifact region of the second image, and the initial overlap region.
In some embodiments, the first processing module 102 is configured to:
identifying a preset mark of the first image and a preset mark of the second image;
moving the first image or the second image such that a preset mark of the first image overlaps a preset mark of the second image;
and determining a part formed by overlapping the first image and the second image as the initial overlapping area.
In some embodiments, the first processing module 102 is further configured to:
and preprocessing the first image and the second image to enable the parts of the first image and the second image which can be superposed to have the same brightness and contrast.
In some embodiments, the non-artifact regions of the first image and the non-artifact regions of the second image each comprise a separate non-artifact region and a non-artifact region corresponding to the initial overlap region; the second processing module 103 is configured to:
replacing the initial overlapping area according to the non-artifact area corresponding to the initial overlapping area to obtain a final overlapping area;
and splicing the independent non-artifact region of the first image, the independent non-artifact region of the second image and the final superposition region to obtain the target image.
In some embodiments, the second processing module 103 is configured to:
replacing a part of the initial superposition region corresponding to an artifact region in the first image with a non-artifact region of a corresponding position range in the second image;
replacing a part of the initial superposition region corresponding to the artifact region in the second image with a non-artifact region of a corresponding position range in the first image;
determining a target area from the non-artifact areas corresponding to the initial superposition area according to the pixel values of the first image and the second image, and replacing the non-artifact areas in the initial superposition area with the target area.
In some embodiments, the first image is generated when the ultrasound probe is at a position closer to a target monitoring position than when the second image is generated, the second processing module 103 to:
and under the condition that the difference value between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold value, determining the target area according to the pixel information of the first image, or determining the target image according to the average value of the pixel information of the first image and the pixel information of the second image.
In some embodiments, the obtaining module 101 is configured to:
controlling the ultrasonic probe to vibrate in a direction vertical to the target position according to a preset vibration frequency according to the vibration signal and transmitting a detection ultrasonic wave;
generating a plurality of images according to the ultrasonic echoes received by the ultrasonic probe;
and determining the first image and the second image from the plurality of images according to the vibration signal.
In some embodiments, the obtaining module 101 is configured to: and sending a vibration signal to the vibration mechanism so that the vibration mechanism pushes the ultrasonic probe to vibrate in a direction vertical to the target position according to a preset vibration frequency and sends a detection ultrasonic wave.
In some embodiments, the vibration mechanism is a cam mechanism; the cycle angle of the cam mechanism at the short diameter and the cycle angle at the long diameter both lie in the interval [90°, 140°], and the cycle angle between the short diameter and the long diameter lies in the interval [80°, 180°].
It is to be understood that the invention is not limited to the particular arrangements and instrumentalities described in the above embodiments and shown in the drawings. For convenience and brevity of description, detailed descriptions of known methods are omitted here; for the specific working processes of the system, modules and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
As shown in fig. 9, an embodiment of the present invention provides an electronic device, which may include:
one or more processors 901;
a memory 902 on which one or more programs are stored, which when executed by one or more processors, cause the one or more processors to implement the image processing method of any one of the above;
one or more I/O interfaces 903 coupled between the processor and the memory and configured to enable information interaction between the processor and the memory.
The processor 901 is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory 902 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM) and FLASH memory; the I/O interface (read/write interface) 903 is connected between the processor 901 and the memory 902, enables information interaction between them, and includes but is not limited to a data bus (Bus) and the like.
In some embodiments, the processor 901, memory 902, and I/O interface 903 are connected to each other and to other components of the computing device by a bus.
The present embodiment further provides a computer readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method provided in the present embodiment, and in order to avoid repeated descriptions, specific steps of the image processing method are not described herein again.
It will be understood by those of ordinary skill in the art that all or some of the steps of the above inventive method, systems, functional modules/units in the apparatus may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that although some embodiments described herein include some features that are included in other embodiments and not others, combinations of features from different embodiments are meant to be within the scope of the invention and to form further embodiments.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (12)

1. An image processing method, wherein the method comprises:
acquiring a first image and a second image generated when an ultrasonic probe is positioned at different positions, wherein the generation time interval of the first image and the second image is less than a preset time threshold;
determining an initial overlapping area of the first image and the second image according to a preset mark of the first image and a preset mark of the second image;
and determining a target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region.
2. The method according to claim 1, wherein the step of determining an initial overlapping area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image comprises:
identifying a preset mark of the first image and a preset mark of the second image;
moving the first image or the second image such that a preset mark of the first image overlaps a preset mark of the second image;
and determining a part formed by overlapping the first image and the second image as the initial overlapping area.
3. The method according to claim 2, wherein, prior to the step of determining the portion into which the first image and the second image are superimposed as the initial superimposition area, the method further comprises:
and preprocessing the first image and the second image to enable the parts of the first image and the second image which can be superposed to have the same brightness and contrast.
4. The method of claim 1, wherein the non-artifact regions of the first image and the second image each comprise a separate non-artifact region and a non-artifact region corresponding to the initial overlap region;
the step of determining a target image from the non-artifact region of the first image, the non-artifact region of the second image and the initial overlap region comprises:
replacing the initial overlapping area according to the non-artifact area corresponding to the initial overlapping area to obtain a final overlapping area;
and splicing the independent non-artifact region of the first image, the independent non-artifact region of the second image and the final superposition region to obtain the target image.
5. The method of claim 4, wherein the step of performing the replacement processing on the initial overlapping region according to the non-artifact region corresponding to the initial overlapping region comprises:
replacing a part of the initial superposition region corresponding to an artifact region in the first image with a non-artifact region of a corresponding position range in the second image;
replacing a part of the initial superposition region corresponding to the artifact region in the second image with a non-artifact region of a corresponding position range in the first image;
determining a target area from the non-artifact areas corresponding to the initial superposition area according to the pixel values of the first image and the second image, and replacing the non-artifact areas in the initial superposition area with the target area.
6. The method of claim 5, wherein the first image is generated while the ultrasound probe is at a position closer to a target monitoring location than the second image is generated, the determining a target region from the non-artifact regions corresponding to the initial overlap region based on pixel values of the first image and pixel values of the second image comprising:
and under the condition that the difference value between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold value, determining the target area according to the pixel information of the first image, or determining the target image according to the average value of the pixel information of the first image and the pixel information of the second image.
7. The method of any of claims 1-6, wherein the step of acquiring the first and second images generated while the ultrasound probe is at different locations comprises:
controlling the ultrasonic probe to vibrate in a direction vertical to the target position according to a preset vibration frequency according to the vibration signal and transmitting a detection ultrasonic wave;
generating a plurality of images according to the ultrasonic echoes received by the ultrasonic probe;
and determining the first image and the second image from the plurality of images according to the vibration signal.
8. The method of claim 7, wherein the step of controlling the ultrasonic probe to vibrate in a direction perpendicular to the target position at a preset vibration frequency and emit a probe ultrasonic wave according to the vibration signal comprises:
and sending the vibration signal to a vibration mechanism so that the vibration mechanism pushes the ultrasonic probe to vibrate in a direction vertical to the target position according to the preset vibration frequency and sends a detection ultrasonic wave.
9. The method of claim 8, wherein the vibration mechanism is a cam mechanism, a cycle angle of the cam mechanism at the short diameter and a cycle angle at the long diameter both lying in the interval [90°, 140°], and a cycle angle of the cam mechanism between the short diameter and the long diameter lying in the interval [80°, 180°].
10. An image processing apparatus, wherein the image processing apparatus comprises:
the acquisition module is used for acquiring a first image and a second image which are generated when the ultrasonic probe is positioned at different positions and the generation time interval of which is smaller than a preset time threshold;
the first processing module is used for determining an initial overlapping area of the first image and the second image according to a preset mark of the first image and a preset mark of the second image;
and the second processing module is used for determining a target image according to the non-artifact region of the first image, the non-artifact region of the second image and the initial superposition region.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1 to 9;
one or more I/O interfaces connected between the processor and the memory and configured to enable information interaction between the processor and the memory.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 9.
CN202111663348.5A 2021-12-30 2021-12-30 Image processing method, image processing device, electronic equipment and computer readable medium Pending CN114287963A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111663348.5A CN114287963A (en) 2021-12-30 2021-12-30 Image processing method, image processing device, electronic equipment and computer readable medium
PCT/CN2022/139412 WO2023125058A1 (en) 2021-12-30 2022-12-15 Image processing method and apparatus, electronic device and computer readable medium


Publications (1)

Publication Number Publication Date
CN114287963A true CN114287963A (en) 2022-04-08

Family

ID=80973442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111663348.5A Pending CN114287963A (en) 2021-12-30 2021-12-30 Image processing method, image processing device, electronic equipment and computer readable medium

Country Status (2)

Country Link
CN (1) CN114287963A (en)
WO (1) WO2023125058A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012452A (en) * 2023-03-28 2023-04-25 天津舞影猫科技有限公司 Puncture navigation system and method for positioning target object based on ultrasonic image
WO2023125058A1 (en) * 2021-12-30 2023-07-06 重庆海扶医疗科技股份有限公司 Image processing method and apparatus, electronic device and computer readable medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117428782A (en) * 2023-12-04 2024-01-23 南开大学 Micro-nano target sound wave operation method and sound wave operation platform

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101396287A (en) * 2007-09-28 2009-04-01 株式会社东芝 Ultrasound diagnosis apparatus and program
JP2010029281A (en) * 2008-07-25 2010-02-12 Aloka Co Ltd Ultrasonic diagnostic apparatus
CN103123721A (en) * 2011-11-17 2013-05-29 重庆海扶医疗科技股份有限公司 Method and device for reducing artifacts in image in real time
CN112150571A (en) * 2020-09-30 2020-12-29 上海联影医疗科技股份有限公司 Image motion artifact eliminating method, device, equipment and storage medium
CN113379664A (en) * 2021-06-23 2021-09-10 青岛海信医疗设备股份有限公司 Method for enhancing ultrasonic puncture needle in ultrasonic image, ultrasonic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10540769B2 (en) * 2017-03-23 2020-01-21 General Electric Company Method and system for enhanced ultrasound image visualization by detecting and replacing acoustic shadow artifacts
US10915992B1 (en) * 2019-08-07 2021-02-09 Nanotronics Imaging, Inc. System, method and apparatus for macroscopic inspection of reflective specimens
CN111524067B (en) * 2020-04-01 2023-09-12 北京东软医疗设备有限公司 Image processing method, device and equipment
CN114287963A (en) * 2021-12-30 2022-04-08 重庆海扶医疗科技股份有限公司 Image processing method, image processing device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
WO2023125058A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
CN114287963A (en) Image processing method, image processing device, electronic equipment and computer readable medium
JP7268087B2 (en) Image capture guidance using model-based segmentation
CN1748650B (en) Method for extending an ultrasound image field of view
US9603579B2 (en) Three-dimensional (3D) ultrasound system for scanning object inside human body and method for operating 3D ultrasound system
CN104379064B (en) The bearing calibration of diagnostic ultrasound equipment and view data
KR100875208B1 (en) High intensity focus ultrasound system
JP6576947B2 (en) Ultrasound imaging system and method for tracking specular reflectors
JP2010501214A (en) High intensity focused ultrasound therapy system guided by an imaging device
CN102133110A (en) ULTRASONIC DIAGNOSTIC APPARATUS and MEDICAL IMAGE DIAGNOSTIC APPARATUS
JP2008229342A (en) Ultrasound system and method for forming ultrasound image
KR20140095848A (en) Method and system for ultrasound treatment
JP2021536276A (en) Identification of the fat layer by ultrasound images
CN101491449A (en) Contrast agent destruction effectiveness determination for medical diagnostic ultrasound imaging
US10537305B2 (en) Detecting amniotic fluid position based on shear wave propagation
CN107440720A (en) The bearing calibration of diagnostic ultrasound equipment and view data
US20140276062A1 (en) Ultrasound diagnostic device and method for controlling ultrasound diagnostic device
US20120123249A1 (en) Providing an optimal ultrasound image for interventional treatment in a medical system
CN113509209A (en) Ophthalmologic ultrasonic imaging method and device
US20110282202A1 (en) Display system and method of ultrasound apparatus
US20150105658A1 (en) Ultrasonic imaging apparatus and control method thereof
US20120053462A1 (en) 3d ultrasound system for providing beam direction and method of operating 3d ultrasound system
US11419585B2 (en) Methods and systems for turbulence awareness enabled ultrasound scanning
Loizou Ultrasound image analysis of the carotid artery
WO2019205006A1 (en) Ultrasound imaging method and ultrasound imaging device
CN105828877A (en) Device for treatment of a tissue and method of preparation of an image of an image-guided device for treatment of a tissue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination