WO2019179344A1 - Multi-sensor information fusion-based three-dimensional ultrasound imaging method, device and terminal machine-readable storage medium


Info

Publication number: WO2019179344A1
Application number: PCT/CN2019/078034
Authority: WIPO (PCT)
Prior art keywords: information, pose, position information, camera, set
Other languages: French (fr), Chinese (zh)
Inventors: 高毅 (Gao Yi), 朱良家 (Zhu Liangjia), 余夏夏 (Yu Xiaxia)
Original assignee: 深圳大学 (Shenzhen University)
Priority application: CN201810232253.X (published as CN108403146B), filed March 20, 2018

Classifications

    • A61B 8/483 — Diagnosis using ultrasonic, sonic or infrasonic waves; diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/4254 — Details of probe positioning or probe attachment to the patient, involving determining the position of the probe (e.g. with respect to an external reference frame or to the patient) using sensors mounted on the probe
    • A61B 8/5215 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves, involving processing of medical diagnostic data

Abstract

A multi-sensor information fusion-based three-dimensional ultrasound imaging method, a device and a terminal machine-readable storage medium. The method comprises: first, obtaining first pose information of an ultrasonic probe (101) by using a sensor group, the first pose information being derived by the sensor group from collected linear and angular acceleration information, the sensor group being disposed on the ultrasonic probe (101) and comprising at least two types of devices, an acceleration sensor and a gyroscope; second, obtaining second pose information of the ultrasonic probe (101) by using cameras (201, 202, 203, 204) disposed on the ultrasonic probe (101), the second pose information being derived by the cameras from the collected panoramic scene; then, obtaining the pose information of each frame of the ultrasound image by an optimal estimation filtering method, and reconstructing a three-dimensional image from the frames of the ultrasound image according to an interpolation method. The combined use of the sensor group and the cameras (201, 202, 203, 204) on the ultrasonic probe (101) thereby improves the quality of three-dimensional ultrasound imaging.

Description

Three-dimensional ultrasound imaging method, device and terminal machine readable storage medium based on multi-sensor information fusion

Cross-reference to related applications

The present application claims priority to Chinese Patent Application No. 201810232253.X, filed on March 20, 2018 and entitled "Three-Dimensional Ultrasound Imaging Method and Apparatus Based on Multi-Sensor Information Fusion", the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of ultrasonic imaging technologies, and in particular, to a three-dimensional ultrasound imaging method, apparatus, terminal, and machine readable storage medium based on multi-sensor information fusion.

Background technique

In recent years, three-dimensional ultrasound imaging technology has been applied more and more widely. Compared with traditional two-dimensional imaging, three-dimensional ultrasound imaging spares doctors the process of mentally synthesizing multiple two-dimensional images into a three-dimensional anatomical structure based on experience; its results are intuitive and of considerable clinical value, so it has attracted the attention of researchers and medical workers.

At present, common three-dimensional ultrasound imaging technology falls mainly into two categories. The first uses a three-dimensional electronic phased array or similar means to obtain three-dimensional data without moving the probe, and images immediately or in real time. The second determines the trajectory of the probe in three-dimensional space by means of a spatial position sensor, thereby determining the spatial coordinates and image orientation of each acquired two-dimensional frame, and performs three-dimensional reconstruction of the scanned structure. The second category localizes each scanned two-dimensional image frame in one of four ways:

First, an electromagnetic sensor. An external magnetic field is set up, the probe is placed in this field, and a tiny coil on the probe measures the strength and direction of the field, determining the position and orientation of the probe in six degrees of freedom. However, such probes may have a limited single-scan range and are not suitable for large-scale composite scanning in a single pass, such as one-time overall scanning imaging of a larger organ (such as the liver) or lesion.

Second, an external positioning (visual) device. An external T-shaped dual-camera arm is attached, and a special visual marker, such as a silver reflective ball or a black-and-white circular template, is mounted on the ultrasonic probe. A six-degree-of-freedom estimate is obtained by monitoring the position of the marker with the cameras. The cameras, however, must constantly "stare" at the markers on the probe, while the probe moves freely with the doctor and is often occluded, making tracking impossible.

Third, a mechanically moved probe. The probe is driven by a mechanical device to sweep a spatial region in a fan shape or rotationally. Because it is machine-controlled, the position and orientation of each frame can be obtained, but the scanning scope is very limited and covers only the range the mechanical device can reach. For special scenes such as intravascular ultrasound, such devices can scan well along a fixed axis around a vessel, but for other free scans this device solution does not meet the need for free scanning.

Fourth, a handheld three-dimensional ultrasound system. There is no special positioning device in the system; the operator's technique alone keeps the scan smooth and uniform, and each image frame is assumed to be equidistantly shifted. This not only demands great stability of the operator's technique but also requires that the scanned object be a flat surface, whereas the surface of the human body is mostly curved and cannot satisfy this condition.

In summary, there is currently no effective solution to the problem of poor system performance in three-dimensional ultrasound imaging technology.

Summary of the invention

In view of this, one of the objectives of the embodiments of the present application is to provide a three-dimensional ultrasound imaging method and apparatus based on multi-sensor information fusion, which improve the stability and quality of three-dimensional ultrasound imaging by arranging a sensor group and cameras on the ultrasonic probe.

In a first aspect, an embodiment of the present application provides a three-dimensional ultrasound imaging method based on multi-sensor information fusion, comprising: acquiring first pose information of an ultrasonic probe by using a sensor group, wherein the first pose information is obtained by the sensor group according to collected linear and angular acceleration information, the sensor group is disposed on the ultrasonic probe, and the sensor group includes at least an inertial guidance sensor;

Acquiring second pose information of the ultrasonic probe by using a camera, wherein the second pose information is obtained by the camera according to the collected panoramic scene, and the camera is disposed on the ultrasonic probe;

Obtaining the pose information of each frame of the ultrasound image by using the optimal estimation filtering method;

A three-dimensional image of each frame of the ultrasound image is reconstructed according to an interpolation method.
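The four steps above can be sketched in their simplest form. The following is a minimal illustration, not the claimed implementation: the two 6-DOF pose estimates are combined by a per-component minimum-variance rule, the single-frame scalar case of optimal estimation filtering; the function name and the variance values are assumptions.

```python
import numpy as np

def fuse_poses(imu_pose, cam_pose, imu_var, cam_var):
    """Minimum-variance fusion of two independent 6-DOF pose estimates.

    Each pose is [x, y, z, roll, pitch, yaw]. With per-component variances,
    the optimal gain toward the camera estimate is k = imu_var / (imu_var + cam_var).
    """
    imu_pose = np.asarray(imu_pose, dtype=float)
    cam_pose = np.asarray(cam_pose, dtype=float)
    k = imu_var / (imu_var + cam_var)
    fused = imu_pose + k * (cam_pose - imu_pose)
    fused_var = (imu_var * cam_var) / (imu_var + cam_var)
    return fused, fused_var

# The inertial estimate drifts (larger variance), so the camera is weighted more.
pose, var = fuse_poses([1.00, 0, 0, 0, 0, 0.10],
                       [1.02, 0, 0, 0, 0, 0.12],
                       imu_var=4.0, cam_var=1.0)
```

Note that the fused variance is always smaller than either input variance, which is why combining the sensor group with the cameras improves stability.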

Optionally, in the foregoing method, after obtaining the pose information of each frame of the ultrasound image by using an optimal estimation filtering method, the method further includes:

Calculating the similarity between the three-dimensional image at the current moment and the three-dimensional image at the previous moment;

Generating correction information when the similarity is greater than a preset similarity threshold;

Determining whether the similarity between the first pose information and the second pose information is smaller than the pose estimate corresponding to the correction information;

When the judgment result is no, the three-dimensional image of each frame of the ultrasound image is reconstructed according to the interpolation method;

When the determination result is YES, both the first pose information and the second pose information are subjected to pose correction.

Optionally, in the above method, acquiring the second pose information of the ultrasonic probe by using the camera, wherein the second pose information is obtained by the camera according to the collected panoramic scene and the camera is disposed on the ultrasonic probe, includes:

The first information is collected by a first camera disposed on the ultrasonic probe, position information of the first camera corresponding to the first information is calculated by using a pose estimation algorithm, and a first set of position information and a first set of angle information of the ultrasonic probe are obtained according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom;

The second information is collected by a second camera disposed on the back side of the ultrasonic probe, position information of the second camera corresponding to the second information is calculated by using the pose estimation algorithm, and a second set of position information and a second set of angle information of the ultrasonic probe are obtained according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom;

The third information is collected by a third camera disposed on the ultrasonic probe handle, position information of the third camera corresponding to the third information is calculated by using the pose estimation algorithm, and a third set of position information and a third set of angle information of the ultrasonic probe are obtained according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom;

The fourth information is collected by a fourth camera disposed on the back side of the ultrasonic probe handle, position information of the fourth camera corresponding to the fourth information is calculated by using the pose estimation algorithm, and a fourth set of position information and a fourth set of angle information of the ultrasonic probe are obtained according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom;

And splicing the first set of position information, the first set of angle information, the second set of position information, the second set of angle information, the third set of position information, the third set of angle information, the fourth set of position information, and the fourth set of angle information into the second pose information.
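The splicing step can be illustrated as follows. This is a minimal sketch; the 24-element vector layout and the function name are assumptions, since the text does not specify the concatenation order.

```python
import numpy as np

def splice_second_pose(camera_groups):
    """Splice four cameras' (position, angle) groups into one vector.

    camera_groups: four (position_xyz, angles_rpy) pairs, ordered from
    first to fourth camera. Result: a 24-element second-pose vector
    laid out as [p1, a1, p2, a2, p3, a3, p4, a4].
    """
    assert len(camera_groups) == 4
    parts = []
    for position, angles in camera_groups:
        parts.append(np.asarray(position, dtype=float))
        parts.append(np.asarray(angles, dtype=float))
    return np.concatenate(parts)

# Synthetic example: camera i reports position [i, i, i] and angles [0.1*i]*3.
groups = [([i, i, i], [0.1 * i] * 3) for i in range(4)]
second_pose = splice_second_pose(groups)
```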

Optionally, in the above method, the acquiring the second pose information of the ultrasound probe by using the camera includes:

Calculating spatial position information of each camera by using a pose estimation algorithm according to information collected by each camera;

Obtaining position information and angle information of the ultrasonic probe as the second pose information according to the relative position information of each camera on the ultrasonic probe and the spatial position information of each camera.
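This alternative, transforming each camera's spatial pose into a probe pose via its known mounting position and combining the results, can be sketched as follows. Translation-only offsets and simple averaging are simplifying assumptions, not the patent's specified computation.

```python
import numpy as np

def probe_pose_from_cameras(camera_positions, mounting_offsets, camera_angles):
    """Estimate the probe pose from several cameras' spatial poses.

    Subtracting each camera's known mounting offset (expressed here in the
    world frame, a simplification; a full version would rotate the offset by
    the camera's orientation) yields one estimate of the probe origin per
    camera; the estimates are averaged. Angles are averaged directly, which
    is reasonable only when the cameras agree closely.
    """
    positions = np.asarray(camera_positions, dtype=float)
    offsets = np.asarray(mounting_offsets, dtype=float)
    probe_position = (positions - offsets).mean(axis=0)
    probe_angles = np.asarray(camera_angles, dtype=float).mean(axis=0)
    return probe_position, probe_angles

# Two cameras whose offsets exactly explain their positions: probe at origin.
position, angles = probe_pose_from_cameras(
    camera_positions=[[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]],
    mounting_offsets=[[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]],
    camera_angles=[[0.1, 0.0, 0.0], [0.3, 0.0, 0.0]])
```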

Optionally, in the above method, the pose estimation algorithm comprises a simultaneous localization and mapping (SLAM) algorithm or a visual-inertial odometry (VIO) method.

Optionally, in the foregoing method, after reconstructing the three-dimensional image of each frame of the ultrasound image according to the interpolation method, the method further includes:

Feeding the three-dimensional image back into the positioner of the ultrasonic probe;

The first pose information and the three-dimensional image of the next frame of the ultrasound image are optimized.

Optionally, in the above method, the inertial guidance sensor comprises a first inertial guidance chip and a second inertial guidance chip, wherein the first inertial guidance chip is disposed on the ultrasonic probe, and the second inertial guidance chip is disposed on the back side of the ultrasonic probe handle.

In a second aspect, the embodiment of the present application provides a three-dimensional ultrasound imaging apparatus based on multi-sensor information fusion, including:

The sensing acquisition module is configured to acquire the first pose information of the ultrasonic probe by using the sensor group, wherein the first pose information is obtained by the sensor group according to the collected linear and angular acceleration information, the sensor group is disposed on the ultrasonic probe, and the sensor group includes at least an inertial guidance sensor;

The camera acquisition module is configured to acquire the second pose information of the ultrasound probe by using the camera, and the second pose information is obtained by the camera according to the collected panoramic scene, wherein the camera is disposed on the ultrasound probe;

a filtering module configured to obtain pose information of each frame of the ultrasound image by using an optimal estimation filtering method;

A reconstruction module configured to reconstruct a three-dimensional image of each frame of the ultrasound image according to an interpolation method.

Optionally, in the foregoing apparatus, the method further includes:

a similarity calculation module configured to calculate a similarity between the three-dimensional image at the current moment and the three-dimensional image at the previous moment;

a similarity comparison module configured to generate correction information when the similarity is greater than a preset similarity threshold;

The similarity determining module is configured to determine whether the similarity between the first pose information and the second pose information is smaller than the pose estimate corresponding to the correction information;

a negative-judgment execution module configured to reconstruct a three-dimensional image of each frame of the ultrasound image according to the interpolation method when the judgment result is negative;

an affirmative-judgment execution module configured to perform pose correction on both the first pose information and the second pose information when the judgment result is affirmative.

Optionally, in the foregoing device, the camera acquiring module is specifically configured to:

The first information is collected by a first camera disposed on the ultrasonic probe, position information of the first camera corresponding to the first information is calculated by using a pose estimation algorithm, and a first set of position information and a first set of angle information of the ultrasonic probe are acquired according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom;

The second information is collected by a second camera disposed on the back side of the ultrasonic probe, position information of the second camera corresponding to the second information is calculated by using the pose estimation algorithm, and a second set of position information and a second set of angle information of the ultrasonic probe are acquired according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom;

The third information is collected by a third camera disposed on the ultrasonic probe handle, position information of the third camera corresponding to the third information is calculated by using the pose estimation algorithm, and a third set of position information and a third set of angle information of the ultrasonic probe are acquired according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom;

The fourth information is collected by a fourth camera disposed on the back side of the ultrasonic probe handle, position information of the fourth camera corresponding to the fourth information is calculated by using the pose estimation algorithm, and a fourth set of position information and a fourth set of angle information of the ultrasonic probe are acquired according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom;

The first set of position information, the first set of angle information, the second set of position information, the second set of angle information, the third set of position information, the third set of angle information, the fourth set of position information, and the fourth set of angle information are spliced into the second pose information.

Optionally, in the above device, the camera acquisition module is configured to calculate spatial position information of each camera by using a pose estimation algorithm according to the information collected by each camera, and to obtain position information and angle information of the ultrasonic probe as the second pose information according to the relative position information of each camera on the ultrasonic probe and the spatial position information of each camera.

Optionally, in the above apparatus, the pose estimation algorithm comprises a simultaneous localization and mapping (SLAM) algorithm or a visual-inertial odometry (VIO) method.

Optionally, in the foregoing apparatus, the method further includes:

a feedback module configured to feed the three-dimensional image into the positioner of the ultrasonic probe;

The optimization module is configured to optimize the first pose information and the three-dimensional image of the next frame of the ultrasound image.

In a third aspect, an embodiment of the present application further provides a terminal, including a memory and a processor, wherein the memory is configured to store a program that supports the processor in performing the multi-sensor information fusion-based three-dimensional ultrasound imaging method provided in the foregoing aspect, and the processor is configured to execute the program stored in the memory.

In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the methods described above.

In the multi-sensor information fusion-based three-dimensional ultrasound imaging method, apparatus, and terminal machine-readable storage medium provided by the embodiments of the present application, the method comprises: first, acquiring the first pose information of the ultrasonic probe by using a sensor group, the sensor group being disposed on the ultrasonic probe and including at least an inertial guidance sensor, so that the environment in which the ultrasonic probe is located is comprehensively considered from different angles to obtain the first pose information; second, acquiring the second pose information of the ultrasonic probe by using the camera disposed on the probe, the second pose information being obtained by the camera according to the collected panoramic scene; then, obtaining the pose information of each frame of the ultrasound image by an optimal estimation filtering method, that is, by optimally filtering the first pose information and the second pose information; and finally, reconstructing a three-dimensional image of each frame of the ultrasound image according to the interpolation method. The above method effectively solves the occlusion problem and the limited-range instability problems that occur in the ultrasound imaging process, and thereby largely improves the performance of three-dimensional ultrasound imaging.

Other features and advantages of the present application will be set forth in the description that follows and will in part become apparent from the description. The objectives and other advantages of the present application are realized and attained by the structure particularly described herein.

The above described objects, features, and advantages of the present invention will become more apparent from the following description.

DRAWINGS

In order to more clearly illustrate the specific embodiments of the present application or the technical solutions in the prior art, the drawings required for the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.

FIG. 1 is a first flowchart of a three-dimensional ultrasound imaging method based on multi-sensor information fusion provided by an embodiment of the present application;

FIG. 2 is a second flowchart of the three-dimensional ultrasound imaging method based on multi-sensor information fusion provided by an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a three-dimensional ultrasound imaging apparatus based on multi-sensor information fusion provided by an embodiment of the present application;

FIG. 4 is a structural connection diagram of a three-dimensional ultrasound imaging apparatus based on multi-sensor information fusion provided by an embodiment of the present application.

Reference numerals: 101 - ultrasonic probe; 102 - ultrasonic probe handle; 103 - ultrasonic probe cable; 104 - ultrasonic probe surface; 201 - first camera; 202 - second camera; 203 - fourth camera; 204 - third camera; 301 - first inertial guidance chip; 302 - second inertial guidance chip; 401 - sensing acquisition module; 402 - camera acquisition module; 403 - filtering module; 404 - reconstruction module.

Detailed description

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the present application is not intended to limit the scope of the application as claimed, but merely represents selected embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

At present, ultrasound imaging technology mainly includes the following four methods of three-dimensional reconstruction. First, the position and orientation of the probe in six degrees of freedom are determined by an electromagnetic sensor; however, this type of probe may have a limited single-scan range and is not suitable for one-time large-scale composite scanning. Second, an external positioning device monitors the position of a marker to obtain a six-degree-of-freedom estimate; the device, however, must constantly "stare" at the markers on the probe, while the probe moves freely with the doctor and is often occluded, making tracking impossible. Third, the position and orientation of each frame are obtained by mechanically moving the probe; however, the scanning range is limited, and for scenes beyond special cases such as intravascular ultrasound the need for free scanning is still not met. Fourth, a handheld system relies solely on the operator's technique to keep the scan smooth and uniform; this not only demands great stability of the operator's technique but also requires the scanned object to be a flat surface, whereas the surface of the human body is mostly curved and cannot satisfy this condition. In summary, these existing approaches may result in poor performance of three-dimensional ultrasound imaging technology.

Based on this, the embodiment of the present application provides a three-dimensional ultrasound imaging method and apparatus based on multi-sensor information fusion, which will be described below by way of embodiments.

Referring to FIG. 1 , FIG. 2 and FIG. 3 , the three-dimensional ultrasound imaging method based on multi-sensor information fusion proposed in this embodiment specifically includes the following steps:

Step S101: Acquire the first pose information of the ultrasonic probe by using the sensor group. In this embodiment, the sensor group is disposed on the ultrasonic probe and includes at least an inertial guidance sensor; that is, different types of sensors are simultaneously disposed on the ultrasonic probe so that the position and posture of the probe are considered from different angles. The application of the above sensor group greatly reduces the system's requirements on the use environment, and no external positioning module needs to be installed for probe pose positioning, which can significantly improve the portability of the system.

It should be noted that the inertial guidance sensor includes a first inertial guidance chip and a second inertial guidance chip; the first inertial guidance chip is disposed on the ultrasonic probe, and the second inertial guidance chip is disposed on the back side of the ultrasonic probe handle. Specifically, the first inertial guidance chip 301 and the second inertial guidance chip 302 can provide position information and attitude information of the ultrasonic probe in real time.
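The role of the inertial guidance chips can be illustrated by a simple dead-reckoning sketch. Plain Euler integration with gravity already removed is an assumption for brevity; real chips perform more sophisticated on-board integration, but either way the error drifts, which is why the camera-based second pose information is needed.

```python
import numpy as np

def dead_reckon(accelerations, angular_rates, dt, pos0, vel0, ang0):
    """Euler-integrate IMU samples into a pose trajectory.

    accelerations: linear accelerations in the world frame, gravity removed
    (m/s^2); angular_rates: gyroscope readings (rad/s); dt: sample period (s).
    Returns a list of (position, angles) after each sample.
    """
    pos = np.asarray(pos0, dtype=float).copy()
    vel = np.asarray(vel0, dtype=float).copy()
    ang = np.asarray(ang0, dtype=float).copy()
    trajectory = []
    for a, w in zip(accelerations, angular_rates):
        vel += np.asarray(a, dtype=float) * dt   # integrate acceleration
        pos += vel * dt                          # integrate velocity
        ang += np.asarray(w, dtype=float) * dt   # integrate angular rate
        trajectory.append((pos.copy(), ang.copy()))
    return trajectory

# Two seconds of constant 1 m/s^2 acceleration along x, with a slow roll.
traj = dead_reckon([[1, 0, 0], [1, 0, 0]], [[0.1, 0, 0]] * 2,
                   dt=1.0, pos0=[0, 0, 0], vel0=[0, 0, 0], ang0=[0, 0, 0])
```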

Step S102: Acquire the second pose information of the ultrasonic probe by using the camera, the second pose information being obtained by the camera according to the collected panoramic scene. In this embodiment, the camera is disposed on the ultrasonic probe. In use, a high-definition camera is usually selected, and the panoramic scene in which the ultrasonic probe is located is collected by the camera to obtain the second pose information of the ultrasonic probe.

In addition, it should be explained that the active information fusion of ultrasonic probe positioning and three-dimensional reconstruction further improves the accuracy and robustness of the system. In the existing passive three-dimensional reconstruction mode, probe positioning and three-dimensional reconstruction are two completely independent modules; in this method, by contrast, the result of the three-dimensional reconstruction can be fed back into the probe locator, and the information of the sensor group and the result of the three-dimensional reconstruction are systematically optimized, thereby improving the robustness and precision of the system.

Step S103: Obtaining pose information of each frame of the ultrasound image by using an optimal estimation filtering method.

In this embodiment, the optimal estimation filtering method is used to obtain, from the first pose information and the second pose information of each frame, the pose information of the ultrasonic probe at the moment each frame of the ultrasound image is acquired.
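One common form of optimal estimation filtering is the Kalman filter; the following scalar sketch is an assumption, as the text does not name a specific filter. Each pose component is predicted with the IMU motion increment (the first pose information) and updated with the camera measurement (the second pose information), frame by frame.

```python
def kalman_track(imu_increments, cam_measurements, q=0.04, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for one pose component across frames.

    Predict: the IMU motion increment u drives the state, adding process
    noise q. Update: the camera pose measurement z corrects the prediction
    with measurement noise r. Returns the filtered estimate per frame.
    """
    x, p = x0, p0
    estimates = []
    for u, z in zip(imu_increments, cam_measurements):
        x, p = x + u, p + q                    # predict with the IMU increment
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p    # update with the camera pose
        estimates.append(x)
    return estimates
```

Running one such filter per degree of freedom yields the fused pose of the probe for every ultrasound frame; the noise constants q and r would be tuned to the actual sensors.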

Step S104: Reconstruct a three-dimensional image of each frame of the ultrasound image according to the interpolation method. That is to say, in this method, the ultrasound images of different frames can be spliced into three-dimensional volume data by a fast and efficient interpolation algorithm.

In this embodiment, according to each frame of the ultrasound image and the pose information of the ultrasonic probe at the moment that frame was acquired, an interpolation algorithm is used to fuse the multiple ultrasound images into three-dimensional volume data.
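A minimal sketch of such frame-to-volume interpolation is given below. Translation-only poses and nearest-voxel accumulation (the so-called pixel method) are simplifying assumptions; the patent does not fix a particular interpolation algorithm.

```python
import numpy as np

def scatter_frames(frames, poses, grid_shape):
    """Project each posed 2-D frame into a voxel grid (pixel-method sketch).

    Each pixel centre is shifted by the frame's pose (pure translation here,
    a simplification; the real pose has six degrees of freedom) and added to
    the nearest voxel; overlapping samples are averaged.
    """
    acc = np.zeros(grid_shape, dtype=float)
    cnt = np.zeros(grid_shape, dtype=int)
    for image, (tx, ty, tz) in zip(frames, poses):
        rows, cols = image.shape
        for r in range(rows):
            for c in range(cols):
                i, j, k = round(c + tx), round(r + ty), round(tz)
                if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1] and 0 <= k < grid_shape[2]:
                    acc[i, j, k] += image[r, c]
                    cnt[i, j, k] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

# Two overlapping one-pixel frames land in the same voxel and are averaged.
volume = scatter_frames([np.array([[2.0]]), np.array([[4.0]])],
                        [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)],
                        grid_shape=(2, 2, 2))
```

A production implementation would also fill holes between sparse frames, which is where the interpolation proper comes in.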

Because of the limitations of the inertial guidance principle, the position information it provides will drift; that is, as the monitoring time grows longer, the error of the position signal may become larger and larger. Optionally, after obtaining the pose information of each frame of the ultrasound image by using the optimal estimation filtering method, the method further includes:

(1) Calculating the similarity between the three-dimensional image at the current time and the three-dimensional image at a previous moment. In actual use, the scanning process of the ultrasound probe often returns to a previously explored position for repeated observation. The two-dimensional ultrasound image at that moment then has a very large similarity to the earlier ultrasound image at the same position. In this method, the similarity between images is fed back to the positioning system; that is, when an image is highly similar to an earlier image, the poses of the probe at the two moments should also be very similar.

(2) When the similarity is greater than a preset similarity threshold, generating correction information. That is, when the similarity exceeds the threshold, the two frames are judged to be very similar, and correction information is generated.

(3) Determining whether the similarity between the first pose information and the second pose information is smaller than the pose estimate corresponding to the correction information. That is, the correction information is applied to correct the first pose estimate, thereby obtaining a more accurate estimate of the pose.

(4) When the judgment result is no, reconstructing the three-dimensional image from the frames of the ultrasound image according to the interpolation method. That is, when the similarity between the first pose information and the second pose information is greater than or equal to the pose estimate corresponding to the correction information, the three-dimensional image is reconstructed directly according to the interpolation method.

(5) When the judgment result is yes, subjecting both the first pose information and the second pose information to pose correction. That is, when the similarity between the first pose information and the second pose information is less than the pose estimate corresponding to the correction information, both are pose-corrected so that the obtained result is more precise.
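Steps (1) to (5) can be sketched as follows. Normalized cross-correlation is used as one plausible similarity measure and a simple blend as the pose correction; the metric, the threshold of 0.9, and the blending gain are all assumptions, since the embodiment prescribes none of them:

```python
import numpy as np

def ncc(img_a, img_b):
    """Normalized cross-correlation: one plausible image-similarity measure
    (the embodiment does not fix a specific metric)."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def maybe_correct_pose(sim, pose_now, pose_then, threshold=0.9, gain=0.5):
    """Steps (1)-(5) in miniature: when two frames are near-identical, the
    probe poses at the two moments should nearly coincide, so pull the
    current pose toward the earlier one; otherwise leave it unchanged."""
    if sim <= threshold:
        return pose_now                        # no correction generated
    return (1.0 - gain) * pose_now + gain * pose_then
```

When the probe revisits a location, the high image similarity triggers a correction that suppresses the accumulated inertial drift; otherwise the filtered pose passes through untouched.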

It should be noted that acquiring the second pose information of the ultrasonic probe by using the camera, the second pose information being obtained by the camera according to the collected panoramic scene, wherein the camera is disposed on the ultrasonic probe, includes the following.

The first thing to note is that, in theory, only one camera is needed to obtain the 6-degree-of-freedom pose of the ultrasound probe. In this method, however, four cameras are provided, for two reasons. First, one or more cameras may be occluded during the examination; even so, the scene data provided by the other cameras still supplies visual information, and from the way the doctor holds the probe during an examination it can be inferred that the probability of all four cameras being blocked simultaneously is very small, so a stable video signal can almost always be obtained. Second, the information obtained from the multiple video signals can be fused and cross-validated, which improves the positioning accuracy.

In the method, the mounting positions of the four cameras are as shown in FIG. 3, and the cameras 201-204 provide video information in real time to monitor the scene surrounding the ultrasonic probe. A real-time pose estimation algorithm recovers the positions of the four cameras from the acquired video signals, and the position and orientation of the probe can then be determined from those four positions. In this way, each frame of the two-dimensional ultrasound image acquired by the probe is assigned 6-degree-of-freedom information: 3 degrees of freedom of position, such as three-dimensional coordinates in space, and 3 degrees of freedom of attitude, such as the pitch, roll and yaw angles of a rigid-body Euler representation. This may specifically include the following steps:

(1) Acquiring first information by a first camera disposed on the ultrasonic probe, calculating the position information of the first camera corresponding to the first information by using a pose estimation algorithm, and acquiring a first set of position information and a first set of angle information of the ultrasonic probe according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom. It should be added that there are various pose estimation algorithms, such as Simultaneous Localization And Mapping (SLAM), Visual Inertial Odometry (VIO), and the like.

(2) Collecting second information by a second camera disposed on the back side of the ultrasonic probe, calculating the position information of the second camera corresponding to the second information by using the pose estimation algorithm, and acquiring a second set of position information and a second set of angle information of the ultrasonic probe according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom.

(3) Collecting third information by a third camera disposed on the handle of the ultrasonic probe, calculating the position information of the third camera corresponding to the third information by using the pose estimation algorithm, and acquiring a third set of position information and a third set of angle information of the ultrasonic probe according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom.

(4) Collecting fourth information by a fourth camera disposed on the back side of the handle of the ultrasonic probe, calculating the position information of the fourth camera corresponding to the fourth information by using the pose estimation algorithm, and acquiring a fourth set of position information and a fourth set of angle information of the ultrasonic probe according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom.

(5) Stitching the first set of position information, the first set of angle information, the second set of position information, the second set of angle information, the third set of position information, the third set of angle information, the fourth set of position information, and the fourth set of angle information into the second pose information.

In this embodiment, the spatial position information of each camera may be calculated by a pose estimation algorithm from the information collected by that camera. Moreover, since the relative positions of the cameras on the ultrasonic probe are fixed, the second pose information of the ultrasonic probe can be calculated by combining the relative position information of each camera on the probe with the spatial position information of each camera. The second pose information includes the position information and the angle information of the ultrasonic probe in space.
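The combination of the four per-camera estimates with their fixed mounting positions can be sketched as below. The 4x4 matrix convention, the chordal averaging of rotations, and all names are illustrative assumptions; the embodiment only requires combining each camera's estimated spatial position with its known relative position on the probe:

```python
import numpy as np

def probe_pose_from_cameras(cam_poses, extrinsics):
    """Each camera's estimated world pose T_i and its fixed mounting pose E_i
    (camera relative to probe) give an independent probe-pose estimate
    T_i @ inv(E_i); the estimates are fused by averaging translations and
    projecting the mean rotation back onto SO(3) (a chordal mean)."""
    est = [T @ np.linalg.inv(E) for T, E in zip(cam_poses, extrinsics)]
    t = np.mean([P[:3, 3] for P in est], axis=0)
    R_mean = np.mean([P[:3, :3] for P in est], axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    # Project the averaged matrix back to a proper rotation.
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    out = np.eye(4)
    out[:3, :3] = R
    out[:3, 3] = t
    return out
```

Because the four estimates are redundant, an occluded or noisy camera perturbs the fused probe pose far less than it would perturb a single-camera estimate.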

It can be seen that in this method the four cameras monitor the panorama surrounding the ultrasonic probe rather than a specific marker, which greatly improves the accuracy and robustness of the calculation. In the prior art, two external cameras spaced about half a meter apart monitor the pose of the probe (6 degrees of freedom) in real time; during use, however, the probe is inevitably blocked by the user or the person being examined, the cameras cannot see the probe, and accurate positioning cannot be obtained. The present method therefore greatly improves the positioning accuracy.

In addition, in order to ensure that the obtained three-dimensional image is as accurate as possible, after the three-dimensional image of the ultrasound frames is reconstructed according to the interpolation method, the method further includes:

Step S201: feeding the three-dimensional image back into the locator of the ultrasonic probe. That is, the reconstructed three-dimensional image is stored in the locator of the ultrasound probe; the purpose of this feedback is to enable comparison with the next frame.

Step S202: optimizing the first pose information of the next frame of the ultrasound image against the three-dimensional image. That is, the first pose information of the next frame is corrected by using the pre-stored reconstructed three-dimensional image, ensuring that the first pose information of the next frame does not accumulate a large error.
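Steps S201 and S202 can be sketched as a simple drift correction: the pose implied by matching the next frame against the stored reconstruction pulls the drifting inertial estimate back. The blending gain is an assumed tuning constant, not specified by the embodiment:

```python
import numpy as np

def correct_drift(pose_imu, pose_from_volume, gain=0.2):
    """Blend the drifting inertial (first) pose estimate toward the pose
    implied by matching the next frame against the stored reconstruction.
    The gain is an assumed tuning constant, not from the embodiment."""
    pose_imu = np.asarray(pose_imu, dtype=float)
    pose_from_volume = np.asarray(pose_from_volume, dtype=float)
    return (1.0 - gain) * pose_imu + gain * pose_from_volume
```

A small gain trusts the inertial sensor frame-to-frame while still bounding its long-term drift through the reconstruction feedback.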

In summary, the three-dimensional ultrasound imaging method based on multi-sensor information fusion provided by this embodiment includes: first, acquiring the first pose information of the ultrasonic probe by using the sensor group, where the sensor group is disposed on the ultrasonic probe and includes at least an inertial guidance sensor, that is, a plurality of sensors monitor the first pose information of the probe; secondly, acquiring the second pose information of the ultrasonic probe by using the camera, the second pose information being obtained by the camera according to the collected panoramic scene, the camera being disposed on the ultrasonic probe in this embodiment; then obtaining the pose information of each frame of the ultrasound image by an optimal estimation filtering method; and finally reconstructing the three-dimensional image from the frames according to the interpolation method. Through the above operations, the system is not affected by electromagnetic fields or ferromagnetic substances in the surrounding environment, the ultrasonic probe is not blocked by the user or the person being examined, the interference to the user is smaller, and positioning failures due to occlusion do not occur. The difficulty of obtaining an accurate pose estimate from a single positioning mode is effectively avoided, and the accuracy of the three-dimensional positioning is further improved by the feedback mechanism, bringing greater freedom and higher positioning accuracy to the user and thereby a better three-dimensional ultrasound image.

Referring to FIG. 4, the embodiment provides a three-dimensional ultrasound imaging apparatus based on multi-sensor information fusion, including:

The sensor acquisition module 401 is configured to acquire the first pose information of the ultrasound probe by using the sensor group, wherein the first pose information is obtained by the sensor group according to the collected linear and angular acceleration information, the sensor group is disposed on the ultrasound probe, and the sensor group includes at least an inertial guidance sensor. The camera acquisition module 402 is configured to acquire the second pose information of the ultrasound probe by using the camera, the second pose information being obtained by the camera according to the collected panoramic scene, wherein the camera is disposed on the ultrasound probe. The filtering module 403 is configured to obtain the pose information of each frame of the ultrasound image by using an optimal estimation filtering method. The reconstruction module 404 is configured to reconstruct the three-dimensional image of each frame of the ultrasound image according to the interpolation method.

Optionally, in this embodiment, the multi-sensor information fusion based three-dimensional ultrasound imaging apparatus further includes: a similarity calculation module configured to calculate the similarity between the three-dimensional image at the current time and the three-dimensional image at a previous time; a similarity comparison module configured to generate correction information when the similarity is greater than a preset similarity threshold; a similarity determination module configured to determine whether the similarity between the first pose information and the second pose information is smaller than the pose estimate corresponding to the correction information; a negative execution module configured to reconstruct the three-dimensional image of each frame of the ultrasound image according to the interpolation method when the determination result is no; and an affirmative execution module configured to perform pose correction on both the first pose information and the second pose information when the determination result is yes.

Optionally, in the embodiment, the camera acquisition module is specifically configured to:

The first information is collected by a first camera disposed on the ultrasonic probe, the position information of the first camera corresponding to the first information is calculated by using a pose estimation algorithm, and a first set of position information and a first set of angle information of the ultrasonic probe are acquired according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom;

The second information is collected by a second camera disposed on the back side of the ultrasonic probe, the position information of the second camera corresponding to the second information is calculated by using the pose estimation algorithm, and a second set of position information and a second set of angle information of the ultrasonic probe are acquired according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom;

The third information is collected by a third camera disposed on the handle of the ultrasonic probe, the position information of the third camera corresponding to the third information is calculated by using the pose estimation algorithm, and a third set of position information and a third set of angle information of the ultrasonic probe are acquired according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom;

The fourth information is collected by a fourth camera disposed on the back side of the handle of the ultrasonic probe, the position information of the fourth camera corresponding to the fourth information is calculated by using the pose estimation algorithm, and a fourth set of position information and a fourth set of angle information of the ultrasonic probe are acquired according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom;

The first group of location information, the first group of angle information, the second group of location information, the second group of angle information, the third group of location information, the third group of angle information, The fourth set of position information and the fourth set of angle information are stitched into the second pose information.

Optionally, in this embodiment, the camera acquisition module is configured to calculate the spatial position information of each camera by using a pose estimation algorithm according to the information collected by that camera, and to obtain the position information and angle information of the ultrasonic probe as the second pose information according to the relative position information of each camera on the ultrasonic probe and the spatial position information of each camera.

Optionally, in this embodiment, the pose estimation algorithm includes a simultaneous localization and mapping algorithm or a visual inertial odometry method.

Optionally, the multi-sensor information fusion based three-dimensional ultrasound imaging apparatus in this embodiment further includes: a feedback module configured to feed the three-dimensional image back into the locator of the ultrasound probe, and an optimization module configured to optimize the first pose information of the next frame of the ultrasound image against the three-dimensional image.

The three-dimensional ultrasonic imaging apparatus based on multi-sensor information fusion provided by the embodiment of the present application has the same technical features as the three-dimensional ultrasonic imaging method based on multi-sensor information fusion provided by the above embodiments, so it can solve the same technical problems and achieve the same technical effects.

The embodiment of the present application further provides a terminal, including a memory and a processor, the memory being configured to store a program supporting the processor to execute the method of the foregoing embodiment, the processor being configured to execute a program stored in the memory.

The embodiment of the present application further provides a computer readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of any of the above methods are performed.

It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. The implementation principle and technical effects of the apparatus provided by the embodiments of the present application are the same as those of the foregoing method embodiments; for brevity, where the apparatus embodiment is silent, reference may be made to the corresponding content in the foregoing method embodiments.

In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of the apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that includes one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that illustrated in the drawings. For example, two consecutive blocks may be executed substantially in parallel, or sometimes in the reverse order, depending upon the functionality involved. It is also noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions, or by a combination of dedicated hardware and computer instructions.

In addition, each functional module or unit in each embodiment of the present application may be integrated to form a separate part, or each module may exist separately, or two or more modules may be integrated to form a separate part.

The functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application. The foregoing storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.

It should be noted that, in this context, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations, nor should they be understood as indicating or implying relative importance. Furthermore, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. An element preceded by "comprising a ..." does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or device that comprises the element.

The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various changes and modifications may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of this application shall be included within the scope of protection of the present application. It should be noted that similar reference numerals and letters indicate similar items in the figures; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.

The foregoing is only a specific embodiment of the present application, but the scope of protection of the present application is not limited thereto. Any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the scope of protection of this application. Therefore, the scope of protection of the present application shall be determined by the scope of the claims.

Industrial applicability

The embodiments of the present application provide a multi-sensor information fusion based three-dimensional ultrasound imaging method, apparatus, and terminal machine-readable storage medium. The method includes: first, acquiring the first pose information of the ultrasound probe by using the sensor group, where the sensor group is disposed on the ultrasonic probe and, during implementation, includes at least an inertial guidance sensor, so that the environment of the ultrasonic probe is comprehensively considered from different angles to obtain the first pose information; secondly, acquiring the second pose information of the ultrasonic probe by using the camera, the second pose information being obtained by the camera according to the collected panoramic scene, the camera being disposed on the ultrasound probe during implementation; then obtaining the pose information of each frame of the ultrasonic image by an optimal estimation filtering method, that is, by optimally filtering the first pose information and the second pose information; and finally, reconstructing the three-dimensional image from the frames of the ultrasound image according to the interpolation method. The above method effectively solves the occlusion problem, the limited-range problem, and the image instability problem that occur in the ultrasound imaging process, thereby largely improving the performance of three-dimensional ultrasound imaging.

Claims (15)

  1. A three-dimensional ultrasound imaging method based on multi-sensor information fusion, comprising:
    Acquiring the first pose information of the ultrasonic probe by using the sensor group, wherein the first pose information is obtained by the sensor group according to the collected inertial information, wherein the sensor group is disposed on the ultrasonic probe, and the sensor group includes at least two types of sensors: acceleration sensors and gyroscopes;
    Acquiring the second pose information of the ultrasound probe by using the camera, and the second pose information is obtained by the camera according to the collected panoramic scene, wherein the camera is disposed on the ultrasound probe, and the camera The number is at least two;
    Obtaining the pose information of each frame of the ultrasound image by using the optimal estimation filtering method;
    A three-dimensional image of each frame of the ultrasound image is reconstructed according to an interpolation method.
  2. The multi-sensor information fusion-based three-dimensional ultrasound imaging method according to claim 1, wherein after obtaining the pose information of each frame of the ultrasound image by using the optimal estimation filtering method, the method further comprises:
    Calculating a similarity between the three-dimensional image at the current moment and the three-dimensional image at a previous moment;
    Generating correction information when the similarity is greater than a similar threshold set in advance;
    Determining whether the similarity between the first pose information and the second pose information is smaller than a pose estimate corresponding to the correction information;
    When the judgment result is no, the three-dimensional image of each frame of the ultrasound image is reconstructed according to the interpolation method;
    When the determination result is YES, both the first pose information and the second pose information are subjected to pose correction.
  3. The multi-sensor information fusion based three-dimensional ultrasound imaging method according to claim 1 or 2, wherein the acquiring the second pose information of the ultrasound probe by using the camera comprises:
    Collecting first information by a first camera disposed on the ultrasonic probe, calculating the position information of the first camera corresponding to the first information by using a pose estimation algorithm, and acquiring a first set of position information and a first set of angle information of the ultrasonic probe according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom;
    Collecting second information by a second camera disposed on the back side of the ultrasonic probe, calculating the position information of the second camera corresponding to the second information by using the pose estimation algorithm, and acquiring a second set of position information and a second set of angle information of the ultrasonic probe according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom;
    Collecting third information by a third camera disposed on the handle of the ultrasonic probe, calculating the position information of the third camera corresponding to the third information by using the pose estimation algorithm, and acquiring a third set of position information and a third set of angle information of the ultrasonic probe according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom;
    Collecting fourth information by a fourth camera disposed on the back side of the handle of the ultrasonic probe, calculating the position information of the fourth camera corresponding to the fourth information by using the pose estimation algorithm, and acquiring a fourth set of position information and a fourth set of angle information of the ultrasonic probe according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom;
    The first group of location information, the first group of angle information, the second group of location information, the second group of angle information, the third group of location information, the third group of angle information, The fourth set of position information and the fourth set of angle information are stitched into the second pose information.
  4. The multi-sensor information fusion based three-dimensional ultrasound imaging method according to claim 1 or 2, wherein the acquiring the second pose information of the ultrasound probe by using the camera comprises:
    Calculating spatial position information of each camera by using a pose estimation algorithm according to information collected by each camera;
    Obtaining position information and angle information of the ultrasonic probe as the second pose information according to relative position information of each camera on the ultrasonic probe and the spatial position information of each camera.
  5. The multi-sensor information fusion based three-dimensional ultrasound imaging method according to claim 3 or 4, wherein the pose estimation algorithm comprises a simultaneous localization and mapping algorithm or a visual inertial odometry method.
  6. The multi-sensor information fusion-based three-dimensional ultrasound imaging method according to any one of claims 1-5, wherein after the reconstructing the three-dimensional image of each frame of the ultrasound image according to the interpolation method, the method further comprises:
    Feeding the three-dimensional image into a positioner of the ultrasound probe;
    The first pose information and the three-dimensional image of the next frame of the ultrasound image are optimized.
  7. The three-dimensional ultrasound imaging method based on multi-sensor information fusion according to any one of claims 1 to 6, wherein the inertial guidance sensor comprises a first inertial guidance chip and a second inertial guidance chip, and the An inertial guidance chip is disposed on the ultrasound probe, and the second inertial guidance chip is disposed on the back side of the ultrasound probe handle.
  8. A three-dimensional ultrasonic imaging apparatus based on multi-sensor information fusion, comprising:
    a sensor acquisition module configured to acquire first pose information of the ultrasound probe by using the sensor group, wherein the first pose information is obtained by the sensor group according to the collected inertial information, wherein the sensor group is disposed on the ultrasound probe, and the sensor group includes at least an inertial guidance sensor;
    The camera acquisition module is configured to acquire the second pose information of the ultrasound probe by using the camera, and the second pose information is obtained by the camera according to the collected panoramic scene, wherein the camera is disposed on the ultrasound probe ;
    a filtering module configured to obtain pose information of each frame of the ultrasound image by using an optimal estimation filtering method;
    A reconstruction module configured to reconstruct a three-dimensional image of each frame of the ultrasound image according to an interpolation method.
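The claims name only "an optimal estimation filtering method" for fusing the two pose sources. One minimal reading is a per-axis Kalman-style measurement update combining the inertial and camera pose estimates; the function name, the 6-DoF pose layout, and the diagonal-covariance assumption below are mine, not from the claims:

```python
import numpy as np

def fuse_poses(pose_imu, cov_imu, pose_cam, cov_cam):
    """Minimum-variance fusion of two independent pose measurements,
    the scalar-per-axis form of a Kalman update.

    pose_* : length-6 sequences [x, y, z, roll, pitch, yaw]
    cov_*  : length-6 per-axis variances
    Returns the fused pose and its per-axis variance."""
    pose_imu = np.asarray(pose_imu, float)
    pose_cam = np.asarray(pose_cam, float)
    cov_imu = np.asarray(cov_imu, float)
    cov_cam = np.asarray(cov_cam, float)
    gain = cov_imu / (cov_imu + cov_cam)      # Kalman gain per axis
    fused = pose_imu + gain * (pose_cam - pose_imu)
    fused_cov = (1.0 - gain) * cov_imu        # variance shrinks after fusion
    return fused, fused_cov
```

When both sources have equal variance the result is the simple average, with half the variance of either input.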
  9. The apparatus of claim 8, further comprising:
    a similarity calculation module configured to calculate a similarity between the three-dimensional image at a current moment and the three-dimensional image at a previous moment;
    a similarity comparison module configured to generate correction information when the similarity is greater than a preset similarity threshold;
    a similarity determining module configured to determine whether the similarity between the first pose information and the second pose information is smaller than the pose estimate corresponding to the correction information;
    a negative-result execution module configured to, when the determination result is negative, reconstruct the three-dimensional image of each frame of the ultrasound image according to the interpolation method; and
    an affirmative-result execution module configured to, when the determination result is affirmative, perform pose correction on both the first pose information and the second pose information.
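Claim 9 compares successive reconstructed volumes against a preset threshold. The claims do not fix a similarity measure; normalized cross-correlation is one common choice, sketched here (the function names and the example threshold of 0.9 are illustrative assumptions):

```python
import numpy as np

def volume_similarity(vol_a: np.ndarray, vol_b: np.ndarray) -> float:
    """Normalized cross-correlation between two reconstructed volumes.
    Returns a value in [-1, 1]; 1 means identical up to brightness
    and contrast."""
    a = vol_a.ravel().astype(float)
    b = vol_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def needs_correction(sim: float, threshold: float = 0.9) -> bool:
    """Per claim 9, correction information is generated when the
    similarity exceeds the preset threshold (value here is illustrative)."""
    return sim > threshold
```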
  10. The multi-sensor information fusion-based three-dimensional ultrasound imaging apparatus according to claim 8 or 9, wherein the camera acquisition module is specifically configured to:
    collecting first information by a first camera disposed on the ultrasonic probe, calculating position information of the first camera corresponding to the first information by using a pose estimation algorithm, and acquiring a first set of position information and a first set of angle information of the ultrasonic probe according to the position information, wherein the first set of position information includes position information of three degrees of freedom, and the first set of angle information includes angle information of three degrees of freedom;
    collecting second information by a second camera disposed on the back side of the ultrasonic probe, calculating position information of the second camera corresponding to the second information by using the pose estimation algorithm, and acquiring a second set of position information and a second set of angle information of the ultrasonic probe according to the position information, wherein the second set of position information includes position information of three degrees of freedom, and the second set of angle information includes angle information of three degrees of freedom;
    collecting third information by a third camera disposed on the ultrasonic probe handle, calculating position information of the third camera corresponding to the third information by using the pose estimation algorithm, and acquiring a third set of position information and a third set of angle information of the ultrasonic probe according to the position information, wherein the third set of position information includes position information of three degrees of freedom, and the third set of angle information includes angle information of three degrees of freedom;
    collecting fourth information by a fourth camera disposed on the back side of the ultrasonic probe handle, calculating position information of the fourth camera corresponding to the fourth information by using the pose estimation algorithm, and acquiring a fourth set of position information and a fourth set of angle information of the ultrasonic probe according to the position information, wherein the fourth set of position information includes position information of three degrees of freedom, and the fourth set of angle information includes angle information of three degrees of freedom; and
    stitching the first set of position information, the first set of angle information, the second set of position information, the second set of angle information, the third set of position information, the third set of angle information, the fourth set of position information, and the fourth set of angle information into the second pose information.
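Claim 10 stitches four per-camera 6-DoF estimates into one second-pose value without specifying the combination rule. A minimal sketch that keeps the stacked per-camera estimates and takes their simple average as the fused value (both the function name and the averaging rule are assumptions):

```python
import numpy as np

def stitch_second_pose(pose_sets):
    """Combine per-camera 6-DoF estimates (claim 10's four sets of
    position + angle information) into a single second-pose estimate.

    pose_sets : iterable of length-6 sequences [x, y, z, roll, pitch, yaw]
    Returns the stacked (N, 6) array and its per-axis mean."""
    stacked = np.vstack([np.asarray(p, float) for p in pose_sets])
    return stacked, stacked.mean(axis=0)
```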
  11. The multi-sensor information fusion-based three-dimensional ultrasound imaging apparatus according to claim 8 or 9, wherein the camera acquisition module is configured to calculate spatial position information of each camera by using a pose estimation algorithm according to the information collected by each camera, and to obtain position information and angle information of the ultrasonic probe as the second pose information according to the relative position information of each camera on the ultrasonic probe and the spatial position information of each camera.
  12. The multi-sensor information fusion-based three-dimensional ultrasound imaging apparatus according to claim 10 or 11, wherein the pose estimation algorithm comprises a simultaneous localization and mapping (SLAM) algorithm or a visual-inertial odometry (VIO) method.
  13. The multi-sensor information fusion based three-dimensional ultrasound imaging apparatus according to any one of claims 8-12, further comprising:
    a feedback module configured to feed the three-dimensional image into a positioner of the ultrasound probe;
    an optimization module configured to optimize the first pose information and the three-dimensional image of the next frame of the ultrasound image.
  14. A terminal, comprising a memory and a processor, the memory being configured to store a program supporting the processor in performing the method of any one of claims 1 to 7, and the processor being configured to execute the program stored in the memory.
  15. A computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
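The claims repeatedly reconstruct the volume "according to an interpolation method" without fixing the variant. A pixel-nearest-voxel sketch of placing one tracked 2D ultrasound frame into a voxel volume using its fused pose (the function name, the 4x4 pose convention, and the uniform isotropic spacing are assumptions, not from the claims):

```python
import numpy as np

def insert_frame(volume, frame, pose_T, spacing):
    """Place one 2D ultrasound frame into the voxel volume using its
    fused pose (a 4x4 homogeneous transform); nearest-neighbour variant
    of interpolation-based freehand 3D reconstruction."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel coordinates in the probe's image plane (z = 0), scaled to mm.
    pts = np.stack([xs.ravel() * spacing, ys.ravel() * spacing,
                    np.zeros(h * w), np.ones(h * w)])
    world = pose_T @ pts                          # map into volume space
    idx = np.round(world[:3] / spacing).astype(int)
    # Keep only voxels that fall inside the volume bounds.
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]
    return volume
```

Repeating this for every tracked frame, then filling remaining holes from neighbouring voxels, yields the reconstructed 3D image the claims describe.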
PCT/CN2019/078034 2018-03-20 2019-03-13 Multi-sensor information fusion-based three-dimensional ultrasound imaging method, device and terminal machine-readable storage medium WO2019179344A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810232253.X 2018-03-20
CN201810232253.XA CN108403146B (en) 2018-03-20 Three-dimensional ultrasonic imaging method and device based on multi-sensor information fusion

Publications (1)

Publication Number Publication Date
WO2019179344A1 true WO2019179344A1 (en) 2019-09-26

Family

ID=63132850

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078034 WO2019179344A1 (en) 2018-03-20 2019-03-13 Multi-sensor information fusion-based three-dimensional ultrasound imaging method, device and terminal machine-readable storage medium

Country Status (1)

Country Link
WO (1) WO2019179344A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1689518A (en) * 2004-04-21 2005-11-02 Siemens Corporate Research, Inc. Method for augmented reality instrument placement using an image based navigation system
CN106056664A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
WO2016176452A1 (en) * 2015-04-28 2016-11-03 Qualcomm Incorporated In-device fusion of optical and inertial positional tracking of ultrasound probes
WO2018002004A1 (en) * 2016-06-30 2018-01-04 Koninklijke Philips N.V. Intertial device tracking system and method of operation thereof
CN107802346A (en) * 2017-10-11 2018-03-16 成都漫程科技有限公司 A kind of ultrasound fusion navigation system and method based on inertial guidance
CN108403146A (en) * 2018-03-20 2018-08-17 余夏夏 Based on 3-D supersonic imaging method and device combined of multi-sensor information

Also Published As

Publication number Publication date
CN108403146A (en) 2018-08-17

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19771978

Country of ref document: EP

Kind code of ref document: A1