CN116206061A - Coordinate calibration method and electronic equipment - Google Patents


Info

Publication number: CN116206061A
Application number: CN202310209076.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: matrix, image, calibration, ultrasonic, ultrasound
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 谢卫国, 欧阳挺, 陈卓, 张旭
Current and original assignee: Shenzhen Weide Precision Medical Technology Co., Ltd. (the listed assignees may be inaccurate)
Application filed by Shenzhen Weide Precision Medical Technology Co., Ltd.
Priority: CN202310209076.4A
Publication: CN116206061A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00: 3D [three-dimensional] image rendering
    • G06T 15/06: Ray-tracing
    • G06T 5/80
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10132: Ultrasound image

Abstract

The application provides a coordinate calibration method and an electronic device. By implementing the method, the spatial coordinates that the electronic device acquires for an ultrasound image in the world coordinate system are more accurate. Consequently, when the electronic device performs three-dimensional modeling of a target object based on ultrasound images, the accuracy of the resulting model is improved, which in turn helps medical staff diagnose a patient's condition more reliably.

Description

Coordinate calibration method and electronic equipment
Technical Field
The application relates to the technical field of medical treatment, in particular to a coordinate calibration method and electronic equipment.
Background
Three-dimensional reconstruction of medical images is an interdisciplinary research field. Using computer and graphics-imaging technology, human tissue represented by a clinically acquired sequence of two-dimensional slices is displayed in three dimensions. From the reconstructed model, a doctor can observe the patient's tissue from multiple angles and at multiple layers, and can accurately and intuitively determine the size, position, and shape of a lesion, thereby making a precise diagnosis.
At present, clinicians often scan a patient's lesion with an ultrasound device to obtain two-dimensional images for modeling, and the accuracy of those images' coordinates in the surgical navigation system is closely tied to the accuracy of the model obtained by 3D reconstruction. However, machining errors in the positioning device (bracket) mounted on the ultrasound probe, or installation errors that can differ with every mounting, introduce obvious errors into the ultrasound-image coordinate matrix derived from the bracket and its design drawing, and this seriously affects the subsequent ultrasound 3D reconstruction.
Therefore, a method of calibrating the spatial coordinates of an ultrasound image needs to be studied to improve the accuracy of the model obtained by 3D reconstruction.
Disclosure of Invention
The patent relates to the field of electric digital data processing. By implementing the method, the spatial coordinates that the electronic device acquires for an ultrasound image in the world coordinate system are more accurate. Consequently, when the electronic device performs three-dimensional modeling of a target object based on ultrasound images, the accuracy of the resulting model is improved, which in turn helps medical staff diagnose a patient's condition more reliably.
The above and other objects are achieved by the features of the independent claims. Further implementations are presented in the dependent claims, the description and the figures.
In a first aspect, the present application provides a coordinate calibration method, including: determining a calibration matrix from a first matrix and a second matrix, where both matrices characterize the spatial position of the image section of an ultrasound device in the world coordinate system, the first matrix is determined based on an ultrasound calibration device, and the second matrix is determined based on an optical tracking device and the ultrasound device; and calibrating the spatial coordinates of a target image using the calibration matrix, the target image being an image obtained by scanning with the ultrasound device.
In the present method, the first matrix is determined based on an ultrasound calibration device, and the second matrix is determined based on a positioning device on the ultrasound device (e.g., optical tracking spheres on the ultrasound probe). It should be understood that the first matrix characterizes the real coordinates of the image section of the ultrasound device in the world coordinate system; its error is zero, or small enough to be negligible. The second matrix, although it also characterizes the coordinates of the image section in the world coordinate system, is determined from the positioning device, and machining errors or per-installation mounting errors of that device introduce error relative to the real coordinates. That is, in an ideal case the first matrix and the second matrix are identical, but in practice the second matrix is likely to differ from the first because of those errors.
However, in the actual modeling process the ultrasound device must scan the target object many times to obtain the multi-frame images used for modeling. For reasons such as the volume and weight of the ultrasound calibration device and the way it is used, the coordinates of each scanned image cannot be determined in real time from the calibration device during scanning; they can only be determined from the optical tracking device and the positioning device on the ultrasound device. As explained above, coordinates determined in that way are not accurate enough, which degrades the subsequent ultrasound 3D reconstruction and reduces the accuracy of the resulting model.
Therefore, in this method a calibration matrix is determined from the first and second matrices, and the spatial coordinates of images subsequently scanned by the ultrasound device (including the target image) are calibrated with it. This preserves the fidelity and accuracy of the spatial coordinates of subsequently scanned images, so that when the target object is modeled in three dimensions from ultrasound images, the accuracy of the resulting model improves and medical staff are better able to diagnose the patient's condition.
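The relationship above can be sketched as follows, assuming (as the text implies but does not state explicitly) that the calibration matrix C maps the tracker-derived pose onto the phantom-derived pose, i.e. first = C · second. All function names are illustrative, not from the patent.

```python
import numpy as np

def calibration_matrix(first, second):
    # Hypothetical reading of the method: first = C @ second,
    # so C = first @ inv(second). Both inputs are 4x4 homogeneous poses.
    return first @ np.linalg.inv(second)

def calibrate(target_pose, C):
    # Calibrate a tracker-derived image pose with the calibration matrix.
    return C @ target_pose
```

With this convention, applying C to any later tracker-derived pose restores a phantom-quality pose, provided the probe-to-bracket error stays constant between calibration and scanning.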
With reference to the first aspect, in a possible implementation manner, the ultrasound calibration apparatus is provided with a wiring area, and before the determining the calibration matrix according to the first matrix and the second matrix, the method further includes: acquiring a first image, wherein the first image is an image obtained by scanning the wiring area by the ultrasonic equipment; determining a third matrix according to imaging positions of lines in the wiring area in the first image, wherein the third matrix represents the spatial position of an image section of the ultrasonic device in the ultrasonic calibration device; and determining the first matrix according to the third matrix and a fourth matrix, wherein the fourth matrix represents the spatial position of the ultrasonic calibration equipment in a world coordinate system.
In this embodiment, in order to accurately determine the real position of the image section of the ultrasound device in the world coordinate system, the first image is obtained by scanning the wiring area of the ultrasound calibration device with the ultrasound device. The lines in the wiring area are arranged at known positions in a known configuration, and the image section of the ultrasound device intersects those lines in three-dimensional space; the intersection points appear in the first image. In general, three points in space define a plane. Therefore, after the first image is obtained, the spatial position of the image section, that is, the third matrix, can be determined from the imaging points of the wiring-area lines in the first image. Note that this spatial position characterizes the image section relative to the ultrasound calibration device (where the section lies inside the calibration device), not its position in the world coordinate system. To obtain the latter, the ultrasound calibration device may additionally be fitted with a positioning device, and the position of the calibration device in the world coordinate system, that is, the fourth matrix, is obtained from the optical tracking device and that positioning device. The third and fourth matrices are then combined to obtain the real spatial position of the image section of the ultrasound device in the world coordinate system, that is, the first matrix.
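One plausible reading of the composition step, with illustrative names: the fourth matrix (phantom in world) left-multiplies the third matrix (image plane in phantom). This is a sketch under that assumption, not the patent's stated formula.

```python
import numpy as np

def first_matrix(third, fourth):
    # third: pose of the image section inside the calibration phantom.
    # fourth: pose of the phantom in the world coordinate system.
    # Chaining the two yields the image section's pose in the world frame.
    return fourth @ third
```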
With reference to the first aspect, in a possible implementation manner, the calibrating spatial coordinates of the image scanned by the ultrasound device using the calibration matrix includes: determining differences between elements in the calibration matrix and corresponding elements in the same-order identity matrix; and under the condition that the difference value between each element in the calibration matrix and the corresponding element in the same-order identity matrix is larger than a first threshold value, calibrating the space coordinates of the image scanned by the ultrasonic equipment by using the calibration matrix.
It should be understood that although the first and second matrices differ, when the difference between them is small, that is, when the tracker-derived spatial position of the image section differs only slightly from its real spatial position, the accuracy of the resulting model is essentially unaffected even if the spatial coordinates of the images are not calibrated with the calibration matrix. Conversely, still using the calibration matrix to calibrate every ultrasound image used for subsequent modeling would increase the processing load of the device and reduce modeling efficiency.
Therefore, in this embodiment the calibration matrix is used to calibrate the spatial coordinates of images scanned by the ultrasound device only when the difference between each element of the calibration matrix and the corresponding element of the same-order identity matrix exceeds the first threshold (i.e., the tracker-derived position of the image section deviates sufficiently from its real position). Otherwise, in subsequent modeling the image coordinates determined by the optical tracking device and the positioning device on the ultrasound device are used directly for three-dimensional modeling, without calibration. This saves device resources and speeds up modeling.
Specifically, the first threshold may be set to 0.001, or may be set to another value, which is not limited in this application.
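The threshold test might look like the following sketch. The text says "each element"; it is read here as triggering calibration when any element deviates from the identity by more than the threshold, which is an assumption since the exact rule is not spelled out.

```python
import numpy as np

FIRST_THRESHOLD = 0.001  # the example value given in the description

def needs_calibration(C, threshold=FIRST_THRESHOLD):
    # Compare the calibration matrix element-wise with the same-order
    # identity matrix; calibrate only when the deviation is large enough.
    return bool(np.any(np.abs(C - np.eye(C.shape[0])) > threshold))
```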
With reference to the first aspect, in one possible implementation manner, the wiring area is provided with a first line group, a second line group and a third line group, where the three line groups are all arranged in an "N" structure and respectively correspond to three light spots in the first image, and determining the third matrix according to the imaging positions of the lines of the wiring area in the first image includes: determining a first coordinate point, a second coordinate point and a third coordinate point based on the three light spots respectively corresponding to the first, second and third line groups in the first image; and determining the coordinate matrix of the plane defined by the first, second and third coordinate points as the third matrix.
In this embodiment, because each line group arranged in the "N" structure contains three lines, each group forms three intersection points with the image section of the ultrasound device. Regard the left and right strokes of the letter "N" as two parallel line segments (the first stroke on the left, the second on the right), with the third stroke running from the upper end of the first stroke to the lower end of the second. Moving down the letter, points on the third stroke lie progressively farther from the first stroke and closer to the second. Therefore, among the three light spots that each "N"-structured line group produces in the first image, the spot belonging to the third stroke (the line not parallel to the other two) can be identified, and its specific position along that stroke determined, from the distances between each spot and the other spots.
In general, three points in space define a plane. Therefore, from the three line groups arranged in the "N" structure, at least three position points can be determined inside the ultrasound calibration device; those three points define a plane, and the spatial position coordinates of that plane constitute the third matrix.
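The classic N-wire ratio construction behind these paragraphs can be sketched as follows; this is a hypothetical implementation, since the patent gives no formulas. The middle spot divides the image line in the same ratio as the intersection point divides the diagonal wire, so its 3D position in the phantom frame follows by linear interpolation between the diagonal's known endpoints.

```python
import numpy as np

def n_wire_point(spot1, spot2, spot3, diag_start, diag_end):
    # spot1, spot3: 2D image positions of the echoes on the two parallel
    # wires; spot2: echo on the diagonal (non-parallel) wire.
    # diag_start, diag_end: known 3D endpoints of the diagonal wire in
    # the phantom frame. The distance ratio along the image line equals
    # the ratio along the diagonal, which localizes the intersection in 3D.
    r = np.linalg.norm(spot2 - spot1) / np.linalg.norm(spot3 - spot1)
    return diag_start + r * (diag_end - diag_start)
```

Repeating this for the three "N" groups yields three 3D points, and the plane through them gives the third matrix.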
With reference to the first aspect, in a possible implementation manner, the target image is one of the multi-frame images obtained by scanning the target object with the ultrasound device, and after the spatial coordinates of the target image are calibrated with the calibration matrix, the method further includes: performing pixel-space mapping and interpolation on each frame of the multi-frame images to obtain three-dimensional data representing the spatial physical characteristics of the target object, where the spatial coordinates of every frame of the multi-frame images are obtained based on the calibration matrix; and modeling based on the three-dimensional data to obtain a three-dimensional model of the target object.
It should be understood that to obtain a three-dimensional model of the target object from two-dimensional images, the two-dimensional coordinates (x, y) of the multi-frame images must be mapped to three-dimensional coordinates (x, y, z). When the two-dimensional data are mapped into three-dimensional space, the point cloud (the three-dimensional data) obtained from the multi-frame images is sparse, so the three-dimensional data must be filled in by interpolation to avoid gaps in the volume. Specifically, the interpolation algorithm may be squared-distance-weighted interpolation, Bezier interpolation, or another interpolation algorithm; this application does not limit the choice.
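As a sketch of the hole-filling step, assuming squared-distance-weighted interpolation (one of the options named above; function and parameter names are illustrative):

```python
import numpy as np

def fill_voxel(voxel_center, points, values):
    # Inverse-squared-distance weighting: nearby ultrasound samples
    # contribute more to an empty voxel than distant ones.
    d2 = np.sum((points - voxel_center) ** 2, axis=1)
    if np.any(d2 == 0):                # a sample lies exactly on the voxel
        return float(values[np.argmin(d2)])
    w = 1.0 / d2
    return float(np.sum(w * values) / np.sum(w))
```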
With reference to the first aspect, in one possible implementation manner, the target image is an ultrasound image of a continuous frame in a video stream, and the video stream is a video stream obtained by scanning, by the ultrasound device, the target object in any direction where the target object is located.
In this embodiment, the ultrasound device may capture the video stream from any direction around the target object, so when modeling with the consecutive ultrasound frames, those frames may include images taken at multiple orientations of the target object. Therefore, three-dimensional modeling of the target object can cover a wider range of model reconstruction and support flexible model stitching, and the resulting model is more accurate.
In a second aspect, the present application provides a coordinate calibration apparatus, comprising: a determining unit, configured to determine a calibration matrix from a first matrix and a second matrix, where both matrices characterize the spatial position of the current image section of the ultrasound device in the world coordinate system, the first matrix is determined based on the ultrasound calibration device, and the second matrix is determined based on the optical tracking device and the ultrasound device; and a calibration unit, configured to calibrate the spatial coordinates of a target image using the calibration matrix, the target image being an image obtained by scanning with the ultrasound device.
With reference to the second aspect, in a possible implementation manner, the ultrasonic calibration apparatus is provided with a wiring area, and the device further includes: an acquisition unit configured to acquire a first image, the first image being an image obtained by scanning the wiring area by the ultrasonic apparatus; a positioning unit, configured to determine a third matrix according to an imaging position of each line in the wiring area in the first image, where the third matrix characterizes a spatial position of an image section of the ultrasound device in an ultrasound calibration device; the determining unit is further configured to determine the first matrix according to the third matrix and a fourth matrix, where the fourth matrix characterizes a spatial position of the ultrasound calibration apparatus in a world coordinate system.
With reference to the second aspect, in a possible implementation manner, the calibration unit is specifically configured to: determining differences between elements in the calibration matrix and corresponding elements in the same-order identity matrix; and under the condition that the difference value between each element in the calibration matrix and the corresponding element in the same-order identity matrix is larger than a first threshold value, calibrating the space coordinates of the image scanned by the ultrasonic equipment by using the calibration matrix.
With reference to the second aspect, in one possible implementation manner, the wiring area is provided with a first line group, a second line group and a third line group, where the three line groups are all arranged in an "N" structure and respectively correspond to three light spots in the first image, and the positioning unit is specifically configured to: determine a first coordinate point, a second coordinate point and a third coordinate point based on the three light spots respectively corresponding to the first, second and third line groups in the first image; and determine the coordinate matrix of the plane defined by the first, second and third coordinate points as the third matrix.
With reference to the second aspect, in a possible implementation manner, the target image is included in a multi-frame image obtained by scanning the target object by the ultrasound device, and the apparatus further includes: the conversion unit is used for carrying out pixel space mapping and interpolation processing on each frame of image in the multi-frame images to obtain three-dimensional data representing the spatial physical characteristics of the target object, and the spatial coordinates of any frame of image in the multi-frame images are obtained based on the calibration matrix; and the modeling unit is used for modeling based on the three-dimensional data to obtain a three-dimensional model of the target object.
With reference to the second aspect, in one possible implementation manner, the target image is an ultrasound image of a continuous frame in a video stream, and the video stream is a video stream obtained by scanning, by the ultrasound device, the target object in any direction where the target object is located.
In a third aspect, the present application provides a coordinate calibration system comprising an ultrasound device, an ultrasound calibration device, an optical tracking device, and a calibration device, where the ultrasound calibration device is provided with a first positioning means and a wiring area and the ultrasound device is provided with a second positioning means. The ultrasound device is configured to scan the wiring area of the ultrasound calibration device to obtain a first image. The optical tracking device is configured to photograph the first positioning means to obtain first position information, and to photograph the second positioning means to obtain second position information. The calibration device is configured to determine a first matrix based on the first position information and the first image, and to determine a second matrix based on the second position information. The calibration device is further configured to determine a calibration matrix based on the first matrix and the second matrix, and to calibrate the spatial coordinates of the target image scanned by the ultrasound device using the calibration matrix.
In a fourth aspect, the present application provides an electronic device, including one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect.
In a fifth aspect, the present application provides a chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform a method as in the first aspect or as in any possible implementation of the first aspect.
In a sixth aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as in the first aspect or any possible implementation of the first aspect.
Drawings
Fig. 1 is a schematic plan view of an installation position of a positioning device on an ultrasonic probe according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for calibrating coordinates according to an embodiment of the present application;
fig. 3 is a schematic diagram of determining coordinates of an image section of an ultrasonic device according to an embodiment of the present application;
fig. 4 is a physical diagram of a coordinate calibration system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a coordinate calibration device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items.
Since the embodiments of the present application relate to coordinate calibration methods and 3D ultrasound reconstruction techniques, for ease of understanding, the following description will first refer to related terms.
(1) Image coordinates and three-dimensional reconstruction
The image coordinates refer to the position, in a given coordinate system, of the cross-section in which the image lies. In general, the pose of the image plane in three-dimensional space can be determined from a fourth-order matrix; in homogeneous form, the plane can be expressed as:

a·x + b·y + c·z + d = 0, i.e. (a, b, c, d)·(x, y, z, 1)ᵀ = 0

where (x, y, z) represents any point in the plane of the image and (a, b, c) represents the normal vector of the plane of the image.
When a camera images, converting a point (Xw, Yw, Zw) in three-dimensional space under the world coordinate system to a pixel (u, v) requires a series of transformations: world coordinate system -> camera coordinate system -> image coordinate system, where:
the image coordinate system is a coordinate system established in the image plane with reference to the two-dimensional image. According to the different units, the system can be divided into pixel coordinates (unit: pixel number) and physical size coordinates (unit: mm), and the origin of coordinates is the intersection point position of the optical axis of the camera and the physical coordinate system of the image.
The camera coordinate system is the coordinate system in which the camera measures objects from its own viewpoint. Its origin lies on the camera's optical axis, and its z-axis is parallel to the optical axis. It serves as the bridge between the world and the image: an object in the world coordinate system is first transferred into the camera coordinate system by a rigid-body transformation, and the camera coordinate system is then related to the image coordinate system.
The world coordinate system is a reference coordinate system introduced to describe the position of objects in the real world. It is the reference frame for actual object positions, and the conversion between it and the camera coordinate system is a rigid-body transformation.
Three-dimensional reconstruction recovers three-dimensional information from two-dimensional information; in passive vision methods, the two-dimensional information is an image sequence. To go from two dimensions to three, the coordinates of the two-dimensional images must likewise be converted from two-dimensional to three-dimensional coordinates. In general, images obtained by photographing a target object with a visual device such as a camera or an ultrasound device are two-dimensional, and the positions of their pixels in the image coordinate system can be determined from them. Consequently, when three-dimensional reconstruction is performed from two-dimensional images, the two-dimensional data must pass through the coordinate-conversion chain image coordinates (u, v) (or imaging-plane/image physical coordinates (x, y)) -> camera coordinates (Xc, Yc, Zc) -> world coordinates (Xw, Yw, Zw) to become the three-dimensional data used for 3D modeling.
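The chain above can be written compactly as a sketch of the standard pinhole model; K, R and t are illustrative intrinsics and extrinsics, not values from the patent.

```python
import numpy as np

def world_to_pixel(Pw, R, t, K):
    # world -> camera: rigid-body transformation (rotation R, translation t)
    Pc = R @ Pw + t
    # camera -> image: perspective projection with the intrinsic matrix K
    uvw = K @ Pc
    # homogeneous divide yields the pixel coordinates (u, v)
    return uvw[:2] / uvw[2]
```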
(2) Northern Digital Inc. (NDI) optical tracking device
NDI is a supplier of optical and electromagnetic measurement and tracking solutions, founded in 1981 by graduates of the University of Waterloo. From the outset, NDI optical tracking devices have been optical measurement and tracking systems for motion tracking. Based on specialized sensor technology and a proprietary optical design, the system tracks small infrared markers attached to an object, allowing real-time, high-precision 3D and 6DOF measurements. The infrared markers may also be attached to hand-held OEM surgical instruments, so that the robot, the surgical instruments, and the patient can be tracked in real time in the same coordinate system. The position and orientation of each tracked object are uniquely identified by their distinct marker configurations and can be located and visualized on the fly in the OEM host interface.
Three-dimensional reconstruction of medical images is an interdisciplinary research field. Using computer and graphics-imaging technology, human tissue represented by a clinically acquired sequence of two-dimensional slices is displayed in three dimensions. From the reconstructed model, a doctor can observe the patient's tissue from multiple angles and at multiple layers, and can accurately and intuitively determine the size, position, and shape of a lesion, thereby making a precise diagnosis.
At present, clinicians often scan a patient's lesion with an ultrasound device to obtain two-dimensional images for modeling, and the accuracy of those images' coordinates in the surgical navigation system is closely tied to the accuracy of the model obtained by 3D reconstruction. In a conventional three-dimensional modeling process, the spatial position of the image section of the ultrasound device is typically determined by a positioning device on the ultrasound device (e.g., an optical tracking ball on the ultrasound probe) together with an optical tracking device. As shown in Fig. 1 (A), the ultrasound probe 10 is part of an ultrasound device; mounted on it is an optical positioning device that the optical tracking device can observe and track, namely the optical positioning device 11 shown in Fig. 1 (A), with optical tracking balls 111 arranged at the ports of the optical positioning device 11.
It should be understood that, in determining the spatial position of the image section of the ultrasound device based on the optical positioning device, the optical tracking device first determines the spatial position of the optical positioning device 11 and then combines it with the relative position of the optical positioning device and the ultrasound device (the ultrasound probe 10) to obtain the spatial position of the image section. Therefore, to accurately obtain the spatial position of the image section of the ultrasonic device, the relative positional relationship between the optical positioning device 11 and the ultrasonic probe 10 must be strictly controlled; this relationship is generally already specified in the product development and production stages of the optical positioning device 11 and the ultrasonic probe 10.
However, since the optical positioning device on the ultrasonic device needs to be cleaned and replaced, it must be frequently disassembled and reassembled, which is likely to cause the mounting position of the positioning device on the ultrasonic probe to deviate from the prescribed mounting position. As shown in fig. 1 (B), if the optical positioning device 13 were accurately mounted on the ultrasound probe 12, it would be located in the region 13' indicated by the dotted line in fig. 1 (B), and one of the holders in the optical positioning device would be coincident with (or parallel to) the central axis 14 of the ultrasound probe 12. However, due to installation errors, the position of the optical positioning device 13 relative to the ultrasound probe 12 deviates. When the optical tracking device determines the coordinates of the image section of the ultrasound device based on the optical positioning device 13, there is thus very likely an error between the measured coordinates of the image section and its actual coordinates. This error can further affect the accuracy of the 3D model obtained by subsequent ultrasound 3D reconstruction, which in turn affects the medical staff's accurate judgment of the patient's condition and, in serious cases, may even jeopardize the patient's life and health.
In view of the above drawbacks, the present application provides a coordinate calibration method. In the method, the actual spatial coordinate matrix of the image section of the ultrasound device is determined through an ultrasound calibration apparatus and an optical tracking device, the theoretical spatial coordinate matrix of the image section is determined based on the optical tracking device and the positioning device on the ultrasound device, and a calibration matrix is then determined from the two coordinate matrices. In the subsequent ultrasound scanning process, although the spatial coordinate matrix of each ultrasound image is still determined based on the optical tracking device and the positioning device on the ultrasound device, the calibration matrix may be used to calibrate the coordinate matrix of each frame, so as to obtain an accurate spatial coordinate matrix for each frame of ultrasound image. By implementing the method, the spatial coordinates acquired by the electronic device for an ultrasound image are more accurate. In this way, when the electronic device performs three-dimensional modeling of the target object based on the ultrasound images, the accuracy of the resulting model is improved, helping medical staff to better diagnose the patient's condition; refer specifically to fig. 2.
Fig. 2 is a flowchart of a coordinate calibration method according to an embodiment of the present application. As shown in fig. 2, the method may include, but is not limited to, the steps of:
201. The calibration device determines a calibration matrix from the first matrix and the second matrix.
The calibration device may be a mobile phone, a tablet computer (pad), a computer with data transmit/receive functions (such as a notebook computer or a palmtop computer), a mobile internet device (MID), a terminal in industrial control, a terminal in a medical system, a terminal device in a 5G network, or the like. It should be understood that the present application does not limit the specific form of the calibration device.
The first matrix and the second matrix each characterize the spatial position of the image section of the ultrasound device in a world coordinate system, which may be a world coordinate system determined in the coordinate system of the NDI optical tracking device.
The first matrix is determined based on an ultrasound calibration apparatus, and the second matrix is determined based on a positioning device on the ultrasound device, such as optical tracking balls on an ultrasound probe. It should be understood that the first matrix characterizes the real coordinates of the image section of the ultrasound device in the world coordinate system, which are error-free (or have a very small, negligible error). The second matrix is determined based on the positioning device on the ultrasound device (hereinafter referred to as the second positioning device); although it also characterizes the coordinates of the image section in the world coordinate system, machining errors in the positioning device, or errors introduced each time it is installed, mean that the second matrix is prone to deviate from the real coordinates of the image section. That is, in the ideal case the first matrix and the second matrix are identical, but due to a machining error of the positioning device or an installation error, the second matrix is likely to differ from the first matrix.
However, in the actual modeling process, the ultrasound device needs to scan the target object multiple times to obtain the multi-frame images used for modeling, and for reasons such as the volume and weight of the ultrasound calibration apparatus and the way it is used, the coordinates of the images scanned by the ultrasound device cannot be determined in real time from the ultrasound calibration apparatus during scanning; they can only be determined by the optical tracking device and the positioning device on the ultrasound device. As explained above, determining the spatial coordinates of an image from the optical tracking device and the positioning device on the ultrasound device is not accurate enough, which can further affect the subsequent ultrasound 3D reconstruction and reduce the accuracy of the resulting model. Thus, after obtaining the first matrix and the second matrix, the calibration device may perform spatial rigid registration on the two matrices to obtain the calibration matrix. Specifically, assuming the first matrix is A, the second matrix is B, and the calibration matrix is C, the calibration matrix C may be obtained from the following relation:
C*B=A;
where "*" denotes matrix multiplication.
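As an illustrative sketch (not part of the patent text), the relation C*B = A can be solved for C by right-multiplying both sides by the inverse of B. A minimal numpy version, using toy 4x4 homogeneous matrices as a stand-in for the real tracked poses:

```python
import numpy as np

def compute_calibration_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Solve C * B = A for the calibration matrix C.

    A: 4x4 homogeneous pose of the image section measured via the
       ultrasound calibration apparatus (treated as ground truth).
    B: 4x4 homogeneous pose reported via the positioning device on the
       ultrasound probe (subject to mounting error).
    """
    # Right-multiply both sides by B^-1:  C = A * B^-1.
    return A @ np.linalg.inv(B)

# Toy check: if B deviates from A by a small rigid offset, C recovers it.
A = np.eye(4)
A[:3, 3] = [10.0, 20.0, 30.0]      # "true" pose: a pure translation
offset = np.eye(4)
offset[:3, 3] = [0.5, -0.3, 0.1]   # simulated installation error of the mount
B = offset @ A                     # measured (erroneous) pose
C = compute_calibration_matrix(A, B)
assert np.allclose(C @ B, A)       # calibrated pose matches the true pose
```

For real tracking data, B would differ from A by both a rotation and a translation; the same one-line solve applies as long as both matrices are valid homogeneous transforms.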
In an alternative embodiment, the ultrasound calibration apparatus is provided with a wiring area. To determine the first matrix, the wiring area of the ultrasound calibration apparatus may be scanned by the ultrasound device to obtain a first image, and a third matrix may be determined from the imaging position of each line of the wiring area in the first image. The first matrix is then determined from the third matrix and a fourth matrix. The third matrix characterizes the spatial position of the image section of the ultrasound device within the ultrasound calibration apparatus, and the fourth matrix characterizes the spatial position of the ultrasound calibration apparatus in the world coordinate system. Specifically, the ultrasound calibration apparatus may be provided with an optical positioning device (hereinafter referred to as the first positioning device) for tracking by the optical tracking device, and the fourth matrix may be determined based on the optical tracking device and the first positioning device.
It should be understood that the lines in the wiring area are arranged in a known position and structure, that the image section of the ultrasound device and the lines of the wiring area have intersection points in three-dimensional space, and that when the wiring area of the ultrasound calibration apparatus is scanned by the ultrasound device, these intersection points appear as light spots in the first image. In general, three points in space define a plane. Therefore, in this embodiment, after the first image is obtained, the spatial position of the image section of the ultrasound device, that is, the third matrix, may be determined from the imaging points of the lines of the wiring area in the first image. It should be understood that the spatial position described here characterizes the position of the image section relative to the ultrasound calibration apparatus (that is, where the image section lies within the ultrasound calibration apparatus), rather than its position in the world coordinate system. Therefore, to obtain the position of the image section in the world coordinate system, the ultrasound calibration apparatus may further be provided with a positioning device, and the position of the ultrasound calibration apparatus in the world coordinate system, that is, the fourth matrix, may be obtained based on the optical tracking device and the positioning device on the ultrasound calibration apparatus. The third matrix and the fourth matrix are then combined by coordinate transformation to obtain the real spatial position of the image section of the ultrasound device in the world coordinate system, that is, the first matrix.
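To make the chain of transforms concrete, here is a minimal hypothetical sketch (the function name and toy values are ours, not the patent's). Under the usual column-vector convention, the first matrix is the product of the fourth matrix (world pose of the calibration apparatus) and the third matrix (pose of the image section inside the apparatus):

```python
import numpy as np

def image_section_world_pose(M3: np.ndarray, M4: np.ndarray) -> np.ndarray:
    """First matrix = (world <- apparatus) @ (apparatus <- image section)."""
    return M4 @ M3

# Sanity check with the image-section origin expressed in each frame:
M3 = np.eye(4); M3[:3, 3] = [1.0, 2.0, 3.0]    # image section inside the apparatus
M4 = np.eye(4); M4[:3, 3] = [100.0, 0.0, 0.0]  # apparatus in the world frame
p_img = np.array([0.0, 0.0, 0.0, 1.0])         # homogeneous point in image frame
p_world = image_section_world_pose(M3, M4) @ p_img
assert np.allclose(p_world[:3], [101.0, 2.0, 3.0])
```

If the tracker's convention were row-vector or inverse-pose based, the multiplication order would flip; the sketch assumes both matrices map "local to parent" coordinates.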
Specifically, a first line group, a second line group, and a third line group are arranged in the wiring area; each is arranged in the "N" structure, and the three groups respectively correspond to three light spots in the first image. When determining the third matrix, the calibration device may determine a first coordinate point, a second coordinate point, and a third coordinate point based on the three light spots respectively corresponding to the first, second, and third line groups in the first image, and determine the coordinate matrix of the plane in which the first, second, and third coordinate points lie as the third matrix.
See fig. 3 in particular. For ease of understanding, the above ultrasound calibration apparatus may be simplified to the ultrasound calibration apparatus 31 shown in fig. 3 (A); although not shown in fig. 3, the ultrasound calibration apparatus 31 is further provided with the positioning device (i.e., the first positioning device in the foregoing description) that is observed by the optical tracking device to determine the specific coordinate position of the ultrasound calibration apparatus 31 in the world coordinate system. The ultrasound probe 30 in fig. 3 may be the ultrasound probe of the above ultrasound device, and the positioning device 301 arranged on it may be the above second positioning device. In the process of calibrating the ultrasound device to obtain the third matrix, the ultrasound probe 30 needs to be placed in the ultrasound calibration apparatus 31, and the relative position between the ultrasound device (or its ultrasound probe) and the ultrasound calibration apparatus must be kept constant. There is a wiring area in the ultrasound calibration apparatus 31; as shown in fig. 3 (A), the line group 311 and the line group 312, each arranged in the "N" shape, are lines in this wiring area, and may specifically be nylon lines. In addition, fig. 3 (A) shows only two line groups arranged in the "N" structure by way of example; in practice, at least three line groups arranged in the "N" structure may be arranged in the wiring area.
In each line group arranged in the "N" shape, there are two parallel line segments (for example, line segment p1p2 and line segment p3p4 in line group 311) and a line segment intersecting the two parallel segments (for example, line segment p2p3 in line group 311). The coordinates of these points in the ultrasound calibration apparatus 31 (for example, points p1 to p4 shown in fig. 3 (A)) are fixed during the design and production of the ultrasound calibration apparatus.
When the ultrasound probe 30 scans the wiring area in the ultrasound calibration apparatus, the image section of the ultrasound device and the lines of the wiring area have intersections in three-dimensional space, and these intersections appear in the first image in the form of light spots. Reference may be made specifically to fig. 3 (B) and (C).
As shown in fig. 3 (B), the image 32 may be the first image. Assuming that the ultrasound device scans the 3 line groups arranged in the "N" structure in the wiring area, and that these three groups include the line group 311 described above, it follows from the foregoing description that 9 white spots should exist in the first image, namely the points p5-p13 shown in fig. 3 (B).
Here, the points p5, p6, and p7 are assumed to be the imaging points of the line group 311 in the first image. Fig. 3 (C) shows the principle by which the line group 311 forms spots (i.e., the points p5, p6, and p7) in the first image. As shown in fig. 3 (C), the plane 313 is the plane in which the line group 311 of the ultrasound calibration apparatus 31 lies, and the plane 314 is the image section of the ultrasound probe 30. Plane 313 and plane 314 intersect in space, and points p5', p6' and p7' all lie on the intersection line of the two planes. According to the ultrasound imaging principle, the points p5', p6' and p7' form spots in the imaging interface of the ultrasound probe 30, i.e., the points p5, p6 and p7 shown in fig. 3 (B).
After the image 32 is obtained, the distance |d1| between points p5 and p6 and the distance |d2| between points p6 and p7 can be measured from the image. Since the line segment p2p3 in line group 311 intersects line segment p1p2 at point p2 and line segment p3p4 at point p3, the specific position of point p6' on line segment p2p3 can be determined from the ratio of |d1| to |d2|; for example, when the ratio of |d1| to |d2| is 1:1, point p6' is, by elementary plane geometry, the midpoint of line segment p2p3. Also, because the coordinates of points p2 and p3 in the ultrasound calibration apparatus are known, the coordinates of point p6' in the ultrasound calibration apparatus can be determined from its position on line segment p2p3 and the coordinates of points p2 and p3.
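The ratio computation above can be sketched as follows. `locate_diagonal_point` is a hypothetical helper name; the sketch assumes, as the text implies, that the spot-spacing ratio |d1|:|d2| in the image equals the length ratio along the diagonal segment p2p3:

```python
import numpy as np

def locate_diagonal_point(p2, p3, d1: float, d2: float) -> np.ndarray:
    """Locate p6' on segment p2-p3 from the spot-spacing ratio |d1|:|d2|.

    p2, p3: known 3-D coordinates of the diagonal's endpoints in the
            calibration-apparatus frame.
    d1, d2: distances p5-p6 and p6-p7 measured in the ultrasound image.
    """
    t = d1 / (d1 + d2)  # fractional position along p2 -> p3
    return np.asarray(p2, dtype=float) + t * (np.asarray(p3, dtype=float)
                                              - np.asarray(p2, dtype=float))

# Example from the text: |d1| : |d2| = 1 : 1 puts p6' at the midpoint.
p6_prime = locate_diagonal_point([0, 0, 0], [10, 40, 0], d1=3.0, d2=3.0)
assert np.allclose(p6_prime, [5, 20, 0])
```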
Point p6' is a point on plane 314, so if two further points on plane 314 can be determined, the spatial position of plane 314 within the ultrasound calibration apparatus 31 can be determined. Thus, by the same method, the spatial coordinates in the ultrasound calibration apparatus 31 of two other points on plane 314 can be found from the other two sets of spots in image 32, i.e., points p8, p9, p10 and points p11, p12, p13; based on these two points and point p6', the spatial position of the image section of the ultrasound probe 30 in the ultrasound calibration apparatus 31 can be determined. In particular, this spatial position may be represented by a 4x4 (fourth-order) matrix, i.e., the third matrix in the foregoing description. It will be appreciated that this matrix represents the spatial position of the image section relative to the ultrasound calibration apparatus 31, not its position in the world coordinate system determined in the NDI coordinate system; however, from this matrix together with the spatial position of the ultrasound calibration apparatus in that world coordinate system, the spatial position of the image section in the world coordinate system can be determined.
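A minimal sketch of turning the three recovered points into a 4x4 pose matrix. The patent only states that the plane through the three points is represented as a fourth-order matrix; the particular axis convention below (first in-plane direction as x, plane normal as z) is our assumption, since any right-handed frame lying in the plane characterizes the same plane:

```python
import numpy as np

def plane_pose_matrix(q1, q2, q3) -> np.ndarray:
    """Build a 4x4 homogeneous matrix for the plane through three
    non-collinear points, each given in the calibration-apparatus frame."""
    q1, q2, q3 = (np.asarray(q, dtype=float) for q in (q1, q2, q3))
    x = (q2 - q1) / np.linalg.norm(q2 - q1)   # in-plane axis
    n = np.cross(q2 - q1, q3 - q1)
    z = n / np.linalg.norm(n)                 # plane normal
    y = np.cross(z, x)                        # completes a right-handed frame
    M = np.eye(4)
    M[:3, 0], M[:3, 1], M[:3, 2], M[:3, 3] = x, y, z, q1
    return M

M = plane_pose_matrix([0, 0, 0], [1, 0, 0], [0, 1, 0])
assert np.allclose(M[:3, 2], [0, 0, 1])  # normal of the x-y plane is +z
```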
202. The calibration device uses the calibration matrix to calibrate the spatial coordinates of the target image.
The target image is an image obtained by scanning the ultrasonic equipment.
Optionally, the target image is one of the consecutive frames of ultrasound images in a video stream, where the video stream is obtained by the ultrasound device scanning the target object from any direction around it. That is, when modeling with the consecutive frames of ultrasound images, those frames may include images taken from multiple orientations of the target object. Therefore, when the target object is modeled three-dimensionally, reconstruction over a larger range and flexible model stitching can be achieved, and the resulting model is more accurate.
In an alternative embodiment, before calibrating the spatial coordinates of the target image using the calibration matrix, the calibration device may determine, from the calibration matrix, whether the deviation between the spatial position of the image section determined from the positioning device of the ultrasound device and the real spatial position of the image section is large enough to matter. When this deviation is small, the accuracy of the resulting model is essentially unaffected even if the spatial coordinates of the image are not calibrated with the calibration matrix; conversely, still using the calibration matrix to calibrate the spatial coordinates of every ultrasound image used for subsequent modeling would only increase the processing load of the device and reduce modeling efficiency. Specifically, the calibration device may determine the differences between the elements of the calibration matrix and the corresponding elements of the identity matrix of the same order, and calibrate the spatial coordinates of the images scanned by the ultrasound device with the calibration matrix when the difference between each element of the calibration matrix and the corresponding element of the identity matrix is larger than a first threshold. Optionally, the first threshold may be set to 0.001.
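The check can be sketched as below. Whether the text's "each element" condition means every element or any element exceeding the threshold is ambiguous; this sketch (names ours) triggers calibration as soon as any element deviates, which is the more conservative reading:

```python
import numpy as np

FIRST_THRESHOLD = 0.001  # value suggested in the text

def needs_calibration(C: np.ndarray, threshold: float = FIRST_THRESHOLD) -> bool:
    """Return True when the calibration matrix C deviates enough from the
    same-order identity matrix that applying it is worthwhile."""
    I = np.eye(C.shape[0])
    return bool(np.any(np.abs(C - I) > threshold))

assert not needs_calibration(np.eye(4))   # perfect mounting: skip calibration
C = np.eye(4)
C[0, 3] = 0.5                             # e.g. a 0.5 mm translational offset
assert needs_calibration(C)               # deviation exceeds the threshold
```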
In an optional embodiment, the target image is one of multiple frames of images obtained by the ultrasound device scanning the target object. To obtain three-dimensional data from the two-dimensional data, and to avoid model blurring caused by three-dimensional data that is too sparse, after the calibration matrix is used to calibrate the spatial coordinates of the target image, the calibration device may perform pixel spatial mapping and interpolation on each of the multiple frames to obtain three-dimensional data representing the spatial physical characteristics of the target object, the spatial coordinates of any frame of the multiple frames having been calibrated based on the calibration matrix. After the three-dimensional data is obtained, the calibration device may build a model from it to obtain a three-dimensional model of the target object.
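A simplified sketch of the pixel spatial mapping step for one frame (the interpolation/hole-filling pass is omitted). The pixel spacings `sx`, `sy` and the convention that the image lies in the z=0 plane of its own frame are our assumptions:

```python
import numpy as np

def map_pixels_to_world(image: np.ndarray, pose: np.ndarray,
                        sx: float, sy: float) -> np.ndarray:
    """Map every pixel of one calibrated frame into world coordinates.

    pose: 4x4 world pose of the image section, already multiplied by the
          calibration matrix.
    sx, sy: pixel spacing (e.g. mm/pixel) along image columns and rows.
    Returns an (H*W, 4) array of rows [x, y, z, intensity].
    """
    h, w = image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel positions in the image-section frame (z = 0 plane).
    pix = np.stack([u.ravel() * sx, v.ravel() * sy,
                    np.zeros(h * w), np.ones(h * w)])
    world = (pose @ pix)[:3].T
    return np.column_stack([world, image.ravel()])

frame = np.arange(6, dtype=float).reshape(2, 3)
pts = map_pixels_to_world(frame, np.eye(4), sx=0.2, sy=0.2)
assert pts.shape == (6, 4)
```

Stacking the point clouds of all calibrated frames and resampling them onto a regular voxel grid (with interpolation to fill gaps between sweeps) yields the three-dimensional data used for modeling.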
Therefore, in this method, the calibration matrix is determined from the first matrix and the second matrix, and the spatial coordinates of the images (including the target image) obtained by subsequent scanning of the ultrasound device are calibrated with the calibration matrix. This ensures the authenticity and accuracy of the spatial coordinates of the images obtained by subsequent scanning, so that when the target object is modeled three-dimensionally from the ultrasound images, the accuracy of the resulting model is improved, helping medical staff to better diagnose the patient's condition.
Based on the above coordinate calibration method, the present application also provides a coordinate calibration system that can be used to implement the coordinate calibration method of fig. 2. Fig. 4 is a physical diagram of a coordinate calibration system according to an embodiment of the present application. As shown in fig. 4, the coordinate calibration system may include: the ultrasound device 40 (only the ultrasound probe portion is shown in fig. 4), the ultrasound calibration apparatus 41, the optical tracking device 42, and the calibration device 43; the ultrasound calibration apparatus 41 is provided with a first positioning device 411 and a wiring area, and the ultrasound device 40 is provided with a second positioning device 401. Wherein: the ultrasound device 40 is configured to scan the wiring area in the ultrasound calibration apparatus 41 to obtain a first image; the optical tracking device 42 is configured to photograph the first positioning device 411 to obtain first position information, and to photograph the second positioning device 401 to obtain second position information; the calibration device 43 is configured to determine a first matrix based on the first position information and the first image, and to determine a second matrix based on the second position information; the calibration device 43 is further configured to determine a calibration matrix based on the first matrix and the second matrix, and to calibrate the spatial coordinates of the target image scanned by the ultrasound device 40 using the calibration matrix. Specifically:
The ultrasound device 40 may be an A-mode, B-mode, M-mode, or D-mode ultrasound device; in particular, it may be the ultrasound device described above in connection with fig. 2. An optical ball holder, i.e., the second positioning device 401, is mounted on the ultrasound device 40. It should be noted that the second positioning device 401 must be stably fixed to the ultrasound device 40, that is, the relative position of the two cannot change.
The ultrasound calibration apparatus 41 is likewise fitted with an optical ball holder, i.e., the first positioning device 411. When determining the spatial coordinates of the image section of the ultrasound device 40 in the world coordinate system based on the ultrasound calibration apparatus 41, the ultrasound probe of the ultrasound device 40 may be placed inside the ultrasound calibration apparatus 41, after which the relative position of the ultrasound device 40 and the ultrasound calibration apparatus 41 must be kept unchanged, and both the second positioning device 401 and the first positioning device 411 must remain within the field of view of the optical tracking device 42.
The optical tracking device 42 may be an optical measurement system, or an optical tracking system for motion tracking, supplied by NDI. In calibrating the ultrasound image coordinates of the ultrasound device 40, the optical tracking device 42 may determine the spatial coordinates of the image section of the ultrasound device 40 in the world coordinate system by directly observing the second positioning device 401, and determine the spatial coordinates of the ultrasound calibration apparatus 41 in the world coordinate system by directly observing the first positioning device 411.
The calibration device 43 is communicatively connected to the ultrasound device 40 and the optical tracking device 42. Optionally, a display 421 for displaying ultrasound images and building a 3D model based on the ultrasound images may also be provided on the calibration device 43. In particular, the calibration device 43 may be the calibration device of fig. 2. Further, the calibration device may also be configured to perform three-dimensional modeling from the images of the target object scanned by the ultrasound device 40, so as to obtain a three-dimensional model of the target object.
It should be understood that the division of the devices in the above system is merely a division by logical function; they may be fully or partially integrated into one physical entity or may be physically separate. For example, the calibration device 43 may be integrated with the ultrasound device 40 and the optical tracking device 42 as one device; that is, the ultrasound device 40 may also be hosted by the calibration device 43, with the ultrasound probe of the ultrasound device 40 directly connected to an interface on the calibration device 43 via the data line 402. Fig. 4 illustrates only an exemplary architecture of the coordinate calibration system provided herein and the placement of the various devices and elements, and should not be construed as limiting the embodiments of the present application.
In calibrating the ultrasound device, the ultrasound device 40 may first scan the wiring area of the ultrasound calibration apparatus 41 to obtain a first image. The calibration device 43 may determine a third matrix from the imaging position of each line of the wiring area in the first image, where the third matrix characterizes the spatial position of the image section of the ultrasound device 40 within the ultrasound calibration apparatus 41, and the optical tracking device 42 may determine a fourth matrix based on the first positioning device 411, where the fourth matrix characterizes the spatial position of the ultrasound calibration apparatus 41 in the world coordinate system. The calibration device 43 may then determine a first matrix from the third and fourth matrices; the first matrix represents the actual spatial position of the image section of the ultrasound device 40 in the world coordinate system. Furthermore, the optical tracking device 42 may determine a second matrix based on the second positioning device 401; the second matrix also characterizes the spatial position of the image section of the ultrasound device 40 in the world coordinate system, but that position deviates from the actual spatial position by a certain amount. Thus, the calibration device 43 may determine a calibration matrix from the first matrix and the second matrix, and calibrate the spatial coordinates of the images (including the target image) obtained by subsequent scanning of the ultrasound device based on the calibration matrix. For the specific processes, refer to the foregoing related descriptions, which are not repeated here.
After the ultrasound images are calibrated using the coordinate calibration system provided by the embodiment of the present application, the authenticity and accuracy of the spatial coordinates of the images obtained by subsequent scanning can be ensured, and when the target object is modeled three-dimensionally from the ultrasound images, the accuracy of the resulting model is improved, helping medical staff to better diagnose the patient's condition.
Next, a schematic structural diagram of a coordinate calibration device provided in an embodiment of the present application is described, and please refer to fig. 5. As shown in fig. 5, the coordinate calibration apparatus of fig. 5 may perform the flow of the coordinate calibration method of fig. 2, and the apparatus includes:
a determining unit 501, configured to determine a calibration matrix according to a first matrix and a second matrix, where the first matrix and the second matrix each represent a spatial position of a current image section of an ultrasound device in a world coordinate system, the first matrix is determined based on the ultrasound calibration device, and the second matrix is determined based on an optical tracking device and the ultrasound device; and the calibration unit 502 is configured to calibrate spatial coordinates of a target image by using the calibration matrix, where the target image is an image obtained by scanning the ultrasound device.
With reference to the second aspect, in a possible implementation manner, the ultrasonic calibration apparatus is provided with a wiring area, and the device further includes: an acquiring unit 503, configured to acquire a first image, where the first image is an image obtained by scanning the wiring area by the ultrasonic device; a positioning unit 504, configured to determine a third matrix according to an imaging position of each line in the wiring area in the first image, where the third matrix characterizes a spatial position of an image section of the ultrasound device in an ultrasound calibration device; the determining unit is further configured to determine the first matrix according to the third matrix and a fourth matrix, where the fourth matrix characterizes a spatial position of the ultrasound calibration apparatus in a world coordinate system.
With reference to the second aspect, in a possible implementation manner, the calibration unit is specifically configured to: determining differences between elements in the calibration matrix and corresponding elements in the same-order identity matrix; and under the condition that the difference value between each element in the calibration matrix and the corresponding element in the same-order identity matrix is larger than a first threshold value, calibrating the space coordinates of the image scanned by the ultrasonic equipment by using the calibration matrix.
With reference to the second aspect, in one possible implementation manner, the wiring area is provided with a first line group, a second line group and a third line group, where the first line group, the second line group and the third line group are all arranged in an N-shaped structure, and the first line group, the second line group and the third line group respectively correspond to three light spots in the first image, and the positioning unit is specifically configured to: determining a first coordinate point, a second coordinate point and a third coordinate point based on three light spots respectively corresponding to the first line group, the second line group and the third line group in the first image; and determining the coordinate matrixes in the planes of the first coordinate point, the second coordinate point and the third coordinate point as the third matrix.
With reference to the second aspect, in a possible implementation manner, the target image is included in a multi-frame image obtained by scanning the target object by the ultrasound device, and the apparatus further includes: the conversion unit 505 is configured to perform pixel space mapping and interpolation processing on each frame of image in the multiple frames of images to obtain three-dimensional data representing spatial physical characteristics of the target object, where spatial coordinates of any frame of image in the multiple frames of images are obtained by calibration based on the calibration matrix; and the modeling unit is used for modeling based on the three-dimensional data to obtain a three-dimensional model of the target object.
With reference to the second aspect, in one possible implementation manner, the target image is an ultrasound image of consecutive frames in a video stream, and the video stream is obtained by the ultrasound device scanning the target object from any direction in which the target object is located.
It should be understood that the above division of the units of the image processing apparatus is merely a division by logical function; in actual implementation, the units may be fully or partially integrated into one physical entity, or may be physically separate. For example, each of the above units may be a separately established processing element, or the units may be integrated in the same chip, or they may be stored in a memory unit of the controller in the form of program code, with a processing element of the processor calling and executing the functions of the units. In addition, the units may be integrated together or implemented independently. The processing element here may be an integrated circuit chip with signal processing capability. In implementation, the steps of the method or the above units may be implemented by integrated hardware logic circuits in the processor element, or by instructions in the form of software. The processing element may be a general-purpose processor, such as a CPU, or one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs).
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device 60 includes a processor 601, a memory 602, and a communication interface 603; the processor 601, the memory 602, and the communication interface 603 are connected to one another through a bus 604. Specifically, the electronic device 60 may be the electronic device described in the foregoing embodiments.
The memory 602 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM); the memory 602 is used for storing related instructions and data. The communication interface 603 is used to receive and send data. Specifically, the communication interface 603 may be used to implement the functions of the acquisition unit 503 in fig. 5.
The processor 601 may be one or more central processing units (CPUs); in the case where the processor 601 is one CPU, the CPU may be a single-core CPU or a multi-core CPU. Specifically, the processor 601 may implement the functions of the determining unit 501, the calibration unit 502, the positioning unit 504 and the conversion unit 505 in fig. 5.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements: determining a calibration matrix according to a first matrix and a second matrix, where the first matrix and the second matrix both represent the spatial position of the image section of the ultrasound device in a world coordinate system, the first matrix is determined based on an ultrasound calibration device, and the second matrix is determined based on an optical tracking device and the ultrasound device; and calibrating the spatial coordinates of a target image by using the calibration matrix, where the target image is an image obtained by scanning with the ultrasound device.
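One plausible reading of how the calibration matrix relates the two pose estimates is as the correction C satisfying C · M_second = M_first, mapping the optically tracked estimate of the image-plane pose onto the phantom-derived pose. The claim does not spell out the operator, so the following is an assumption-laden sketch with illustrative names:

```python
import numpy as np

def calibration_matrix(first, second):
    """Compute a correction C such that C @ second == first.

    first  : 4x4 pose of the image section derived from the ultrasound
             calibration device (phantom ground truth).
    second : 4x4 pose of the image section derived from the optical
             tracking device and the ultrasound device.
    This right-inverse form is one possible interpretation of
    "determining a calibration matrix according to a first matrix and a
    second matrix"; the patent does not state the exact formula.
    """
    return first @ np.linalg.inv(second)
```

With this convention, applying C to the optically tracked pose reproduces the phantom-derived pose, and C near the identity indicates the two estimates already agree.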
As used in the above embodiments, the term "when" may be interpreted, depending on the context, as "if", "after", "in response to determining", or "in response to detecting". Similarly, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "if it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state drive).
Those of ordinary skill in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes: ROM, random access memory (RAM), magnetic disk, optical disc, or the like.

Claims (10)

1. A method of calibrating coordinates, comprising:
determining a calibration matrix according to a first matrix and a second matrix, wherein the first matrix and the second matrix both represent the spatial position of an image section of an ultrasound device in a world coordinate system, the first matrix is determined based on an ultrasound calibration device, and the second matrix is determined based on an optical tracking device and the ultrasound device;
and calibrating the spatial coordinates of a target image by using the calibration matrix, wherein the target image is an image obtained by scanning with the ultrasound device.
2. The method of claim 1, wherein the ultrasound calibration device is provided with a wiring area, and the method further comprises, before the determining the calibration matrix according to the first matrix and the second matrix:
acquiring a first image, wherein the first image is an image obtained by scanning the wiring area with the ultrasound device;
determining a third matrix according to imaging positions of lines in the wiring area in the first image, wherein the third matrix represents the spatial position of an image section of the ultrasonic device in the ultrasonic calibration device;
and determining the first matrix according to the third matrix and a fourth matrix, wherein the fourth matrix represents the spatial position of the ultrasonic calibration equipment in a world coordinate system.
3. The method of claim 1 or 2, wherein the calibrating the spatial coordinates of the target image by using the calibration matrix comprises:
determining differences between elements in the calibration matrix and corresponding elements in the same-order identity matrix;
and under the condition that the difference value between each element in the calibration matrix and the corresponding element in the same-order identity matrix is larger than a first threshold value, calibrating the space coordinates of the image scanned by the ultrasonic equipment by using the calibration matrix.
4. The method according to claim 2 or 3, wherein a first line group, a second line group and a third line group are provided in the wiring area, the first line group, the second line group and the third line group are each arranged in an "N"-shaped structure and each correspond to three light spots in the first image, and the determining the third matrix according to the imaging positions of the lines in the wiring area in the first image comprises:
determining a first coordinate point, a second coordinate point and a third coordinate point based on the three light spots respectively corresponding to the first line group, the second line group and the third line group in the first image;
and determining the coordinate matrix formed by the first coordinate point, the second coordinate point and the third coordinate point in their plane as the third matrix.
5. The method of any of claims 1-4, wherein the target image is included in multiple frames of images obtained by scanning a target object with the ultrasound device, and the method further comprises, after the calibrating the spatial coordinates of the target image by using the calibration matrix:
performing pixel space mapping and interpolation processing on each frame of image in the multiple frames of images to obtain three-dimensional data representing the spatial physical characteristics of the target object, wherein the spatial coordinates of any frame of image in the multiple frames of images are obtained by calibration based on the calibration matrix;
modeling is carried out based on the three-dimensional data, and a three-dimensional model of the target object is obtained.
6. The method according to any one of claims 1 to 5, wherein the target image is an ultrasound image of consecutive frames in a video stream, and the video stream is a video stream obtained by scanning the target object by the ultrasound device in any direction where the target object is located.
7. A coordinate calibration apparatus, comprising:
the device comprises a determining unit, a calibration unit and a control unit, wherein the determining unit is used for determining a calibration matrix according to a first matrix and a second matrix, the first matrix and the second matrix are used for representing the spatial position of the current image section of the ultrasonic device in a world coordinate system, the first matrix is determined based on the ultrasonic calibration device, and the second matrix is determined based on the optical tracking device and the ultrasonic device;
and the calibration unit is used for calibrating the space coordinates of a target image by using the calibration matrix, wherein the target image is an image obtained by scanning the ultrasonic equipment.
8. A coordinate calibration system, comprising an ultrasound device, an ultrasound calibration device, an optical tracking device, and a calibration device, wherein the ultrasound calibration device is provided with a first positioning device and a wiring area, and the ultrasound device is provided with a second positioning device, wherein:
the ultrasonic equipment is used for scanning the wiring area in the ultrasonic calibration equipment to obtain a first image;
the optical tracking device is used for photographing the first positioning device to obtain first position information, and photographing the second positioning device to obtain second position information;
the calibration device is used for determining a first matrix based on the first position information and the first image, and determining a second matrix based on the second position information;
the calibration device is further used for determining a calibration matrix based on the first matrix and the second matrix, and calibrating the spatial coordinates of a target image obtained by scanning with the ultrasound device by using the calibration matrix.
9. An electronic device, the electronic device comprising: one or more processors, memory, and a display screen;
the memory is coupled with the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any of claims 1-6.
10. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-6.
CN202310209076.4A 2023-02-24 2023-02-24 Coordinate calibration method and electronic equipment Pending CN116206061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310209076.4A CN116206061A (en) 2023-02-24 2023-02-24 Coordinate calibration method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116206061A true CN116206061A (en) 2023-06-02

Family

ID=86518970

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination