CN116363303A - Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image - Google Patents

Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image

Info

Publication number
CN116363303A
CN116363303A
Authority
CN
China
Prior art keywords
auxiliary
matrix
voxel
main
view
Prior art date
Legal status
Pending
Application number
CN202310219770.4A
Other languages
Chinese (zh)
Inventor
戴亚康
孙欣
钱卫庆
郑毅
胡冀苏
周志勇
钱旭升
Current Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date
Filing date
Publication date
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202310219770.4A
Publication of CN116363303A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/44Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application relates to a three-dimensional reconstruction method, device, equipment and medium of a multi-view two-dimensional ultrasonic image, and in particular relates to the technical field of medical image processing. The method comprises the following steps: acquiring two-dimensional ultrasonic data acquired by an ultrasonic probe under a main view angle and an auxiliary view angle to obtain a main view angle image set and an auxiliary view angle image set; converting the main visual angle image set and the auxiliary visual angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix; performing weighted calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix; and carrying out three-dimensional reconstruction on the main and auxiliary voxel matrixes to obtain a three-dimensional reconstruction result. Based on the technical scheme provided by the application, the range and depth of ultrasonic reconstruction are increased by providing the two-dimensional ultrasonic data of a plurality of visual angles, so that the accuracy of three-dimensional reconstruction is improved.

Description

Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image
Technical Field
The invention relates to the technical field of medical image processing, in particular to a three-dimensional reconstruction method, device, equipment and medium of a multi-view two-dimensional ultrasonic image.
Background
The technique of three-dimensional reconstruction of two-dimensional ultrasound images is widely used in assisted-diagnosis scenarios.
A commonly used three-dimensional reconstruction method for two-dimensional ultrasound images acquires image information with a handheld two-dimensional ultrasonic probe and then collects the spatio-temporal information of the images by other positioning means, so as to realize the three-dimensional restoration.
In this technical scheme, factors such as the ultrasonic acquisition frequency, the manual scanning angle and offsets in scanning speed easily cause the slices of the reconstructed original data to be unevenly distributed, which degrades the three-dimensional reconstruction effect.
Disclosure of Invention
The application provides a three-dimensional reconstruction method, device, equipment and medium of a multi-view two-dimensional ultrasonic image.
In one aspect, a method for three-dimensional reconstruction of a multi-view two-dimensional ultrasound image is provided, the method comprising:
acquiring two-dimensional ultrasonic data acquired by an ultrasonic probe under a main view angle and an auxiliary view angle to obtain a main view angle image set and an auxiliary view angle image set;
converting the main visual angle image set and the auxiliary visual angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix;
performing weighted calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix;
and carrying out three-dimensional reconstruction on the main and auxiliary voxel matrixes to obtain a three-dimensional reconstruction result.
In yet another aspect, there is provided a three-dimensional reconstruction apparatus of a multi-view two-dimensional ultrasound image, the apparatus comprising:
the two-dimensional ultrasonic data acquisition module is used for acquiring two-dimensional ultrasonic data acquired by the ultrasonic probe under the main view angle and the auxiliary view angle to obtain a main view angle image set and an auxiliary view angle image set;
the voxel matrix acquisition module is used for converting the main visual angle image set and the auxiliary visual angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix;
the matrix weighting calculation module is used for carrying out weighting calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix;
and the three-dimensional reconstruction module is used for carrying out three-dimensional reconstruction on the main and auxiliary voxel matrix to obtain a three-dimensional reconstruction result.
In yet another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, at least one program, a code set, or an instruction set is loaded and executed by the processor to implement the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image described above.
In yet another aspect, a computer readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein is provided, the at least one instruction, at least one program, a set of codes, or a set of instructions loaded and executed by a processor to implement the three-dimensional reconstruction method of a multi-view two-dimensional ultrasound image as described above.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image.
The technical scheme that this application provided can include following beneficial effect:
the method comprises the steps of obtaining two-dimensional ultrasonic data acquired by an ultrasonic probe under a main view angle and an auxiliary view angle, converting the obtained main view angle image set and auxiliary view angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body element matrix and an auxiliary voxel matrix, carrying out weighted calculation on the main body element matrix and the auxiliary voxel matrix to obtain a main auxiliary voxel matrix, carrying out three-dimensional reconstruction on the main and auxiliary voxel matrices fused with different view angles to obtain a three-dimensional reconstruction result, and providing two-dimensional ultrasonic data of a plurality of view angles, so that the range and depth of ultrasonic reconstruction are increased, the defect that after reconstruction, original data layers are unevenly distributed and even larger angle deviation appears due to the deviation of ultrasonic acquisition frequency, manual scanning angle and speed is overcome, and the accuracy of three-dimensional reconstruction is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a three-dimensional reconstruction apparatus according to an exemplary embodiment.
Fig. 2 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment.
Fig. 3 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment.
Fig. 4 is a schematic diagram illustrating two-dimensional ultrasound data acquisition according to an exemplary embodiment.
Fig. 5 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating bicubic interpolation according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a structure of a three-dimensional reconstruction apparatus of a multi-view two-dimensional ultrasound image according to an exemplary embodiment.
Fig. 8 is a schematic diagram of a computer device provided in accordance with an exemplary embodiment.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
It should be understood that, in the embodiments of the present application, the "indication" may be a direct indication, an indirect indication, or an indication having an association relationship. For example, a indicates B, which may mean that a indicates B directly, e.g., B may be obtained by a; it may also indicate that a indicates B indirectly, e.g. a indicates C, B may be obtained by C; it may also be indicated that there is an association between a and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or an indirect correspondence between two items, may indicate an association between them, or may indicate a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiment of the present application, the "predefining" may be implemented by pre-storing corresponding codes, tables or other manners that may be used to indicate relevant information in devices (including, for example, terminal devices and network devices), and the specific implementation of the present application is not limited.
The three-dimensional reconstruction method shown in the application can be applied to the three-dimensional reconstruction device shown in fig. 1. The three-dimensional reconstruction device comprises: a portable ultrasonic diagnostic apparatus, a magnetic positioning clamp, a magnetic positioning instrument, and a digital acquisition and reconstruction system. Based on this device, the process of three-dimensional reconstruction of the two-dimensional ultrasonic image is as follows:
(1) A portable ultrasonic diagnostic apparatus (comprising an ultrasonic probe and an ultrasonic system) is assembled, and a magnetic positioning clamp is arranged for the ultrasonic probe.
(2) A digital acquisition and reconstruction system is assembled, and communication is established with the ultrasonic system and the magnetic positioning equipment.
(3) A magnetic positioning instrument (comprising a magnetic positioning receiver and a magnetic positioning transmitter) is assembled, wherein the magnetic positioning transmitter is placed at a spatially fixed position, and the magnetic positioning receiver is mounted on a magnetic positioning fixture.
(4) The two-dimensional ultrasonic data are collected at high frequency by the ultrasonic probe while the magnetic positioning data are collected by the magnetic positioning instrument, and the two-dimensional ultrasonic data and the magnetic positioning data are transmitted together to the digital acquisition and reconstruction system.
(5) The digital acquisition and reconstruction system can be used for receiving a two-dimensional ultrasonic data set transmitted by the ultrasonic system and a magnetic positioning data set of the magnetic positioning instrument, and is used for interacting, processing and displaying a three-dimensional reconstructed image.
The three-dimensional reconstruction method for the multi-view two-dimensional ultrasonic image comprehensively considers images from a plurality of acquisition directions, and can rapidly and accurately perform three-dimensional reconstruction and interactive display. The technical scheme provided by the application is further described below.
Fig. 2 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment. The method is applied to the computer equipment. As shown in fig. 2, the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image may include the steps of:
step 210: acquiring two-dimensional ultrasonic data acquired by an ultrasonic probe under a main view angle and an auxiliary view angle, and obtaining a main view angle image set and an auxiliary view angle image set.
In the embodiment of the application, an ultrasonic probe is used for carrying out ultrasonic scanning on a detection object (such as four limbs), the ultrasonic probe is respectively arranged at different positions around the detection object, so that two different visual angles of a main visual angle and an auxiliary visual angle are formed, two-dimensional ultrasonic data acquired under the main visual angle are recorded as a main visual angle image set, and two-dimensional ultrasonic data acquired under the auxiliary visual angle are recorded as an auxiliary visual angle image set.
Step 220: and converting the main visual angle image set and the auxiliary visual angle image set from the ultrasonic coordinate system to the view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix.
The ultrasonic coordinate system is the coordinate system corresponding to the data acquired by the ultrasonic probe. It is determined by the structure of the ultrasonic probe and changes as the probe moves in real physical space. In order to map all the two-dimensional ultrasonic data uniformly into one coordinate system, a fixed coordinate system needs to be set for the computation; this coordinate system, in which the two-dimensional ultrasonic data are actually integrated, is denoted as the view coordinate system.
In the embodiment of the application, the main view angle image set and the auxiliary view angle image set obtained by ultrasonic scanning with the ultrasonic probe are subjected to coordinate conversion. Specifically, both image sets are converted from the ultrasonic coordinate system to the view coordinate system: the main view angle image set is converted into the main body pixel matrix, and the auxiliary view angle image set is converted into the auxiliary voxel matrix.
Step 230: and carrying out weighted calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix.
In the embodiment of the application, the main body pixel matrix and the auxiliary voxel matrix are weighted according to weights to obtain the main and auxiliary voxel matrices.
Step 240: and carrying out three-dimensional reconstruction on the main voxel matrix and the auxiliary voxel matrix to obtain a three-dimensional reconstruction result.
In the embodiment of the application, three-dimensional reconstruction is performed by using data comprising a main voxel matrix and a sub voxel matrix, so that a three-dimensional reconstruction result is obtained.
In summary, in the three-dimensional reconstruction method for multi-view two-dimensional ultrasound images provided by this embodiment, two-dimensional ultrasound data acquired by the ultrasound probe under a main view angle and an auxiliary view angle are obtained, and the resulting main view angle image set and auxiliary view angle image set are converted from the ultrasound coordinate system to the view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix. The main body pixel matrix and the auxiliary voxel matrix are then weighted to obtain a main and auxiliary voxel matrix, and three-dimensional reconstruction is performed on this matrix, which fuses the different view angles, to obtain a three-dimensional reconstruction result. By providing two-dimensional ultrasound data from a plurality of view angles, the range and depth of the ultrasound reconstruction are increased, and the defect that the original data slices are unevenly distributed after reconstruction, or even show larger angle offsets, due to offsets in the ultrasound acquisition frequency, the manual scanning angle and the scanning speed is overcome, so that the accuracy of the three-dimensional reconstruction is improved.
In an exemplary embodiment, the main view angle and the auxiliary view angle are two view angles with opposite directions of the same coordinate axis, each view angle image set comprises data acquired by translating the ultrasonic probe along different coordinate axes, and matrix filling is performed by using the data acquired by translating the different coordinate axes, so as to obtain a voxel matrix under the corresponding view angle.
Fig. 3 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment. The method is applied to the computer equipment. As shown in fig. 3, the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image may include the steps of:
step 310: placing the ultrasonic probe in the negative direction of the first coordinate axis so that the ultrasonic probe is under the main visual angle; and respectively translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction to acquire two-dimensional ultrasonic data, so as to obtain a first main visual angle image set and a second main visual angle image set.
Step 320: placing the ultrasonic probe in the positive direction of the first coordinate axis so that the ultrasonic probe is under the auxiliary view angle; and respectively translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction to acquire two-dimensional ultrasonic data, so as to obtain a first auxiliary visual angle image set and a second auxiliary visual angle image set.
For example, referring to fig. 4, taking an arm as an example, the back of the hand is directed upward and the ultrasonic probe is placed vertically on the arm, with the axis perpendicular to the ultrasonic image plane, along the limb, taken as the Z axis. At the main view angle, the probe is positioned in the negative direction of the Y axis and is perpendicular to the XZ plane; the images acquired while translating the probe along the Z axis direction and along the X axis direction form the first main view angle image set and the second main view angle image set respectively. At the auxiliary view angle, the ultrasonic probe is positioned in the positive direction of the Y axis and is perpendicular to the XZ plane; the images acquired while translating along the Z axis direction and the X axis direction form the first auxiliary view angle image set and the second auxiliary view angle image set.
Step 330: converting the first main visual angle image set from an ultrasonic coordinate system to a view coordinate system, and storing the obtained first group of pixel points in a main body pixel matrix; and converting the second main visual angle image set from the ultrasonic coordinate system to the view coordinate system, and updating the data in the main body pixel matrix by using the obtained second group of pixel points to obtain a final main body pixel matrix.
In the embodiment of the application, a first main view image set of a main view is traversed, pixels are transferred to a view coordinate system through a conversion relation, the group of pixels are stored in a main body pixel matrix as original pixels, the pixels of a second main view image set of the main view are transferred to the view coordinate system through the conversion relation, and data in the main body pixel matrix are updated by using the group of pixels, so that a final main body pixel matrix is obtained.
In one possible implementation manner, updating the data in the main body pixel matrix by using the obtained second group of pixel points to obtain a final main body pixel matrix includes: traversing the pixel points in the second group of pixel points, and if the current pixel point does not exist in the main body pixel matrix, storing the current pixel point in the main body pixel matrix; if the current pixel exists in the main body pixel matrix, carrying out weighted average calculation on the current pixel and the existing pixel, and replacing the existing pixel in the main body pixel matrix with the obtained pixel.
For example, for the second group of pixel points, if a mapped point does not exist in the main body pixel matrix, it is added to the main body pixel matrix; if it already exists, a weighted average is calculated with weights w_1 and w_2 (initial preset values 0.5 and 0.5) and the original value is replaced by the result.
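The following Python sketch illustrates this weighted-average update; it is an illustration under assumptions rather than the original implementation, and in particular the use of NaN as the "no value yet" marker and the array shapes are assumptions made for the example.

```python
import numpy as np

def insert_pixels(voxel_mat, coords, values, w1=0.5, w2=0.5):
    """Fill a voxel matrix with mapped pixel points.

    voxel_mat : 3-D float array initialised to NaN ("empty voxel" marker,
                an assumption of this sketch).
    coords    : (N, 3) integer array of view-coordinate indices.
    values    : (N,) array of pixel intensities.
    A voxel that already holds a value is replaced by the weighted average
    w1 * existing + w2 * new (initial preset values 0.5 and 0.5)."""
    for (x, y, z), v in zip(coords, values):
        if np.isnan(voxel_mat[x, y, z]):          # mapped point not present yet
            voxel_mat[x, y, z] = v
        else:                                      # already present: weighted average
            voxel_mat[x, y, z] = w1 * voxel_mat[x, y, z] + w2 * v
    return voxel_mat
```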
Step 340: converting the second auxiliary view image set from the ultrasonic coordinate system to the view coordinate system, and storing the obtained third group of pixel points in an auxiliary voxel matrix; and converting the second auxiliary visual angle image set from the ultrasonic coordinate system to the view coordinate system, and updating the data in the auxiliary voxel matrix by using the obtained fourth group of pixel points to obtain a final auxiliary voxel matrix.
In the embodiment of the application, a first auxiliary view angle image set of an auxiliary view angle is traversed, pixel points are transferred to a view coordinate system through a conversion relation, the group of pixel points are stored in an auxiliary voxel matrix as original pixel points, then the pixel points of a second auxiliary view angle image set of the auxiliary view angle are transferred to the view coordinate system through the conversion relation, and data in the auxiliary voxel matrix are updated by using the group of pixel points to obtain a final auxiliary voxel matrix.
In one possible implementation, updating the data in the secondary voxel matrix with the obtained fourth set of pixel points to obtain a final secondary voxel matrix includes: traversing the pixel points in the fourth group of pixel points, and if the current pixel point does not exist in the auxiliary voxel matrix, storing the current pixel point in the auxiliary voxel matrix; if the current pixel point exists in the auxiliary voxel matrix, carrying out weighted average calculation on the current pixel point and the existing pixel point, and replacing the existing pixel point in the auxiliary voxel matrix with the obtained pixel point.
For example, for the fourth group of pixel points, if a mapped point does not exist in the auxiliary voxel matrix, it is added to the auxiliary voxel matrix; if it already exists, a weighted average is calculated with weights w_1 and w_2 (initial preset values 0.5 and 0.5) and the original value is replaced by the result.
Step 350: traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the negative direction of the first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a first weight strategy to obtain a first partial value in the main and auxiliary voxel matrices.
Step 360: traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the positive direction of the first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a second weight strategy to obtain a second partial value in the main and auxiliary voxel matrices.
Wherein in the first weight strategy, the weight of the voxels in the main body pixel matrix is greater than the weight of the voxels in the auxiliary voxel matrix; in the second weight strategy, the weight of the voxels in the main voxel matrix is smaller than the weight of the voxels in the auxiliary voxel matrix.
For example, all voxels of the main body pixel matrix and the auxiliary voxel matrix in the negative direction of the first coordinate axis are traversed first and weighted according to the first weight strategy, in which the weights of the main body pixel matrix and the auxiliary voxel matrix are w_3 and w_4 respectively (initial preset values 0.8 and 0.2). Then all voxels of the two matrices in the positive direction of the first coordinate axis are traversed and weighted according to the second weight strategy, in which the weights of the main body pixel matrix and the auxiliary voxel matrix are w_4 and w_3 respectively (initial preset values 0.2 and 0.8).
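The sketch below shows one way to apply the two weight strategies. It is an illustration built on assumptions, not the patent's implementation: in particular, where the boundary between the "negative direction" and "positive direction" regions of the first coordinate axis lies is not specified in the text, so the volume is simply split at its mid-plane here.

```python
import numpy as np

def fuse_main_aux(main_vox, aux_vox, w3=0.8, w4=0.2, axis=0):
    """Fuse the main and auxiliary voxel matrices into the fused
    main and auxiliary voxel matrix. The negative-direction half of the
    first coordinate axis uses the first weight strategy (main weighted
    by w3); the positive-direction half uses the second strategy
    (auxiliary weighted by w3). Splitting at the mid-plane is an
    assumption of this sketch."""
    fused = np.empty_like(main_vox, dtype=np.float32)
    half = main_vox.shape[axis] // 2
    neg = [slice(None)] * main_vox.ndim
    pos = [slice(None)] * main_vox.ndim
    neg[axis] = slice(0, half)          # negative-direction half
    pos[axis] = slice(half, None)       # positive-direction half
    fused[tuple(neg)] = w3 * main_vox[tuple(neg)] + w4 * aux_vox[tuple(neg)]
    fused[tuple(pos)] = w4 * main_vox[tuple(pos)] + w3 * aux_vox[tuple(pos)]
    return fused
```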
Step 370: and carrying out three-dimensional reconstruction on the main voxel matrix and the auxiliary voxel matrix to obtain a three-dimensional reconstruction result.
In summary, in the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image provided by the embodiment, two opposite-oriented view angles of the first coordinate axis are taken as the main view angle and the auxiliary view angle, each view angle image set includes data acquired by translating the ultrasound probe along the second coordinate axis and the third coordinate axis, then matrix filling is performed by using the data acquired by translating the different coordinate axes, so as to obtain a voxel matrix under the corresponding view angle, and finally, through different weight strategies, the voxels of the two voxel matrices in the corresponding directions of the first coordinate axis are weighted, so that the two-dimensional ultrasound data acquired under the two view angles are effectively integrated, and an accurate main and auxiliary voxel matrix is obtained.
In an exemplary embodiment, three-dimensional reconstruction and filling are performed on a primary and secondary voxel matrix obtained by multi-view ultrasound.
Fig. 5 is a method flow diagram illustrating a method of three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, according to an exemplary embodiment. The method is applied to the computer equipment. As shown in fig. 5, the above step 240 (or step 370) may alternatively be implemented as the following steps:
step 510: and scaling the main voxel matrix and the auxiliary voxel matrix from the view coordinate system to the reconstruction coordinate system in an equal proportion to obtain a reconstruction main voxel matrix and an auxiliary voxel matrix.
In the embodiment of the present application, the reconstruction coordinate system R is set as an equal-proportion mapping of the view coordinate system V: P_R = T_RV · P_V, where P_R is a coordinate in the reconstruction coordinate system, P_V is a coordinate in the view coordinate system, and T_RV is the equal-proportion scaling transformation. The main and auxiliary voxel matrix is converted into the reconstructed main and auxiliary voxel matrix through this coordinate transformation.
It will be appreciated that scaling the data in equal proportion changes the amount and the accuracy of the reconstruction: the amount of data computed during interpolation can be decreased to speed up the computation, or increased to improve the reconstruction accuracy.
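A minimal sketch of this equal-proportion mapping, assuming homogeneous 4 x N point matrices; the single scale factor (the value 2.0 below is only an example) controls the trade-off between computation and reconstruction accuracy described above.

```python
import numpy as np

def view_to_reconstruction(points_v, scale=2.0):
    """Equal-proportion mapping P_R = T_RV @ P_V from the view coordinate
    system to the reconstruction coordinate system; points_v is a 4 x N
    matrix of homogeneous coordinates."""
    T_RV = np.diag([scale, scale, scale, 1.0])
    return T_RV @ points_v
```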
Step 520: and performing three-dimensional reconstruction on the reconstructed main and auxiliary voxel matrixes by adopting a bicubic interpolation method to obtain a three-dimensional reconstruction result.
In the embodiment of the present application, a bicubic interpolation method is adopted. As shown in fig. 6, the 64 points with known pixel values that are closest to the point to be calculated in the reconstruction coordinate system R are searched along eight directions, the interpolation equations are evaluated along the x, y and z directions respectively, and the pixel value of the point to be calculated in the reconstruction coordinate system is solved.
[Embedded formula images: the interpolation equation and its convolution kernel, with kernel parameter a = -0.5.]
in one possible implementation, step 520 includes: dividing a reconstructed main and auxiliary voxel matrix into a plurality of subareas; for each sub-region, performing three-dimensional reconstruction of a bicubic interpolation method in parallel through an independent GPU to obtain a sub-region three-dimensional reconstruction result; and fusing all the sub-region three-dimensional reconstruction results to obtain a final three-dimensional reconstruction result.
In this implementation, since the calculations for different voxels are independent of one another, a multithreaded GPU acceleration mode can be adopted: the whole voxel matrix is divided into a plurality of sub-regions, which are handed to different threads to perform the interpolation calculations simultaneously, thereby shortening the reconstruction time.
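As a hedged illustration of the sub-region idea, the sketch below splits the target voxels into independent chunks and interpolates them in parallel, reusing the tricubic_interpolate sketch above; CPU worker processes stand in here for the per-sub-region GPU threads described in the text.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_region(args):
    """Interpolate one sub-region of target coordinates independently."""
    vol, targets = args
    return [tricubic_interpolate(vol, x, y, z) for (x, y, z) in targets]

def parallel_reconstruct(vol, target_coords, n_workers=4):
    """Split the target voxel coordinates into sub-regions, interpolate them
    in parallel and concatenate the results back in their original order."""
    chunks = np.array_split(np.asarray(target_coords, dtype=float), n_workers)
    jobs = [(vol, chunk) for chunk in chunks]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(reconstruct_region, jobs))
    return np.concatenate([np.asarray(p) for p in parts])
```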
In summary, according to the three-dimensional reconstruction method for the multi-view two-dimensional ultrasound image provided by this embodiment, the main and auxiliary voxel matrix obtained from multi-view ultrasound is subjected to three-dimensional reconstruction and filling, and a multithreaded GPU acceleration mode can be adopted in the reconstruction calculation, so that the reconstruction time is shortened and the reconstruction precision is improved.
In an exemplary embodiment, the coordinate conversion method of converting the ultrasound coordinate system into the view coordinate system may refer to the following procedure:
(1) Determining coordinate transformations of magnetic positioning receiver and magnetic positioning transmitter
Let the coordinate system of the magnetic positioning transmitter be W; this coordinate system is fixed during a single experiment. Let the coordinate system of the magnetic positioning receiver be S; this coordinate system changes as the receiver moves or rotates. Let the matrix formed by the coordinates recorded in the S coordinate system be P_S, where each column represents a coordinate point. At any moment, let the matrix formed by the same coordinates in the W coordinate system be P_W; then P_W = T_WS · P_S, where T_WS is obtained from the six-degree-of-freedom data of the magnetic positioning receiver in the physical space W. The six-degree-of-freedom data comprise three translation components and three angle components, representing the offset and attitude of the magnetic positioning receiver relative to the origin of the physical-space coordinate system.
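For illustration, a six-degree-of-freedom reading can be assembled into the homogeneous matrix T_WS as sketched below. The Z-Y-X Euler-angle convention and the radian units are assumptions, since the text does not specify the angle convention of the magnetic positioning instrument.

```python
import numpy as np

def six_dof_to_matrix(tx, ty, tz, rx, ry, rz):
    """Build the 4 x 4 homogeneous transform T_WS from three translation
    components and three rotation angles (radians); Z-Y-X Euler order
    is assumed here."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx       # rotation part
    T[:3, 3] = [tx, ty, tz]        # translation part
    return T
```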
(2) Determining coordinate transformations of an ultrasound probe and a magnetic positioning receiver
Let the ultrasonic coordinate system be U, which changes as the ultrasonic probe moves, and let the magnetic positioning receiver coordinate system be S. Let the matrix formed by the coordinates recorded in the S coordinate system be P_S, each column representing a coordinate point. At any moment, let the matrix formed by the same coordinates in the U coordinate system be P_U; then P_S = T_SU · P_U, where T_SU is a fixed matrix that can be calculated from the ultrasonic probe fixture. It comprises three translation components and three angle components, representing the offset and attitude of the origin of the ultrasonic probe relative to the origin of the sensor (receiver) coordinate system.
(3) Defining a view coordinate system
Let the view coordinate system be V, and let the matrix formed by the coordinates recorded in the V coordinate system be P_V, each column representing a coordinate point. In order to make the view display range as close as possible to the imaging range actually needed while reducing the three-dimensional imaging calculation amount as much as possible, the ultrasonic coordinate system of the first frame or of a middle frame can be selected as the view coordinate system, so that the view coordinate system of a single experiment is fixed and the transformation from the magnetic positioning transmitter to the view coordinates is a fixed matrix, namely P_W = T_WS0 · P_S = T_WS0 · T_SU · P_U0 = T_WS0 · T_SU · P_V, where T_WS0 is the T_WS corresponding to the selected frame and P_U0 is the P_U corresponding to the selected frame. The ranges of the x and y values of the view coordinate system take the imaging width I_W and the imaging height I_H of the ultrasonic image as maximum values respectively, and the z-direction range can be expanded according to actual requirements.
(4) Conversion relation from ultrasonic coordinate system to view coordinate system
From P_W = T_WS0 · T_SU · P_V = T_WS · P_S = T_WS · T_SU · P_U, the transformation matrix from the ultrasonic coordinate system to the view coordinate system is T_VU = (T_WS0 · T_SU)^(-1) · T_WS · T_SU.
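A short sketch of this final conversion, assuming all transforms are 4 x 4 homogeneous matrices and the ultrasound pixel coordinates are given as homogeneous column vectors:

```python
import numpy as np

def ultrasound_to_view(T_WS0, T_SU, T_WS, points_u):
    """Map homogeneous ultrasound-coordinate points (4 x N) into the view
    coordinate system using T_VU = (T_WS0 @ T_SU)^-1 @ T_WS @ T_SU."""
    T_VU = np.linalg.inv(T_WS0 @ T_SU) @ T_WS @ T_SU
    return T_VU @ points_u
```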
The method embodiments may be implemented alone or in combination, which is not limited in this application.
Fig. 7 is a block diagram illustrating a structure of a three-dimensional reconstruction apparatus of a multi-view two-dimensional ultrasound image according to an exemplary embodiment. The device comprises:
the two-dimensional ultrasonic data acquisition module 701 is configured to acquire two-dimensional ultrasonic data acquired by the ultrasonic probe under the main view angle and the auxiliary view angle, and obtain a main view angle image set and an auxiliary view angle image set;
the voxel matrix acquisition module 702 is configured to convert the main view image set and the auxiliary view image set from an ultrasound coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix;
a matrix weighting calculation module 703, configured to perform a weighting calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix;
and the three-dimensional reconstruction module 704 is used for carrying out three-dimensional reconstruction on the main and auxiliary voxel matrix to obtain a three-dimensional reconstruction result.
In one possible implementation, the two-dimensional ultrasound data acquisition module 701 is configured to:
placing the ultrasonic probe in the negative direction of a first coordinate axis so that the ultrasonic probe is under the main visual angle; translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction respectively to acquire two-dimensional ultrasonic data, so as to obtain a first main visual angle image set and a second main visual angle image set;
placing the ultrasonic probe in the positive direction of a first coordinate axis so that the ultrasonic probe is under the auxiliary view angle; and translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction respectively to acquire two-dimensional ultrasonic data, so as to obtain a first auxiliary visual angle image set and a second auxiliary visual angle image set.
In one possible implementation, the voxel matrix acquisition module 702 is configured to:
converting the first main visual angle image set from an ultrasonic coordinate system to a view coordinate system, and storing the obtained first group of pixel points in the main body pixel matrix; converting the second main visual angle image set from an ultrasonic coordinate system to a view coordinate system, and updating data in the main body pixel matrix by using the obtained second group of pixel points to obtain a final main body pixel matrix;
converting the first auxiliary view angle image set from an ultrasonic coordinate system to a view coordinate system, and storing the obtained third group of pixel points in the auxiliary voxel matrix; and converting the second auxiliary view angle image set from an ultrasonic coordinate system to a view coordinate system, and updating data in the auxiliary voxel matrix by using the obtained fourth group of pixel points to obtain the final auxiliary voxel matrix.
In one possible implementation, the voxel matrix acquisition module 702 is configured to:
traversing the pixel points in the second group of pixel points, and if the current pixel point does not exist in the main body pixel matrix, storing the current pixel point in the main body pixel matrix; if the current pixel point exists in the main body pixel matrix, carrying out weighted average calculation on the current pixel point and the existing pixel point, and replacing the existing pixel point in the main body pixel matrix with the obtained pixel point;
traversing the pixel points in the fourth group of pixel points, and if the current pixel point does not exist in the auxiliary voxel matrix, storing the current pixel point in the auxiliary voxel matrix; if the current pixel point exists in the auxiliary voxel matrix, carrying out weighted average calculation on the current pixel point and the existing pixel point, and replacing the existing pixel point in the auxiliary voxel matrix with the obtained pixel point.
In one possible implementation, the matrix weighting calculation module 703 is configured to:
traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the negative direction of a first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a first weight strategy to obtain a first partial value in the main and auxiliary voxel matrices;
traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the positive direction of the first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a second weight strategy to obtain a second partial value in the main and auxiliary voxel matrices;
wherein in the first weight strategy, the weight of the voxels in the main body pixel matrix is greater than the weight of the voxels in the auxiliary voxel matrix; in the second weight strategy, the weight of the voxels in the main body pixel matrix is smaller than the weight of the voxels in the auxiliary voxel matrix.
In one possible implementation, the three-dimensional reconstruction module 704 is configured to:
scaling the main voxel matrix and the auxiliary voxel matrix from the view coordinate system to a reconstruction coordinate system in equal proportion to obtain a reconstruction main voxel matrix and an auxiliary voxel matrix;
and performing three-dimensional reconstruction on the reconstructed main and auxiliary voxel matrixes by adopting a bicubic interpolation method to obtain the three-dimensional reconstruction result.
In one possible implementation, the three-dimensional reconstruction module 704 is configured to:
dividing the reconstructed main and auxiliary voxel matrixes into a plurality of subareas;
for each sub-region, performing three-dimensional reconstruction of a bicubic interpolation method in parallel through an independent GPU to obtain a sub-region three-dimensional reconstruction result;
and fusing all the three-dimensional reconstruction results of the subareas to obtain a final three-dimensional reconstruction result.
It should be noted that: the three-dimensional reconstruction device for multi-view two-dimensional ultrasound images provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, i.e., the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to fig. 8, a schematic diagram of a computer device according to an exemplary embodiment of the present application is provided, where the computer device includes a memory and a processor, and the memory is configured to store a computer program, and when the computer program is executed by the processor, the three-dimensional reconstruction method of the multi-view two-dimensional ultrasound image is implemented.
The processor may be a central processing unit (Central Processing Unit, CPU). The processor may also be any other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present invention. By running the non-transitory software programs, instructions and modules stored in the memory, the processor executes the various functional applications of the processor and performs data processing, i.e., implements the methods of the method embodiments described above.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some implementations, the memory optionally includes memory remotely located relative to the processor, and the remote memory may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one computer program that is loaded and executed by a processor to implement all or part of the steps of the above method. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for three-dimensional reconstruction of a multi-view two-dimensional ultrasound image, the method comprising:
acquiring two-dimensional ultrasonic data acquired by an ultrasonic probe under a main view angle and an auxiliary view angle to obtain a main view angle image set and an auxiliary view angle image set;
converting the main visual angle image set and the auxiliary visual angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix;
performing weighted calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix;
and carrying out three-dimensional reconstruction on the main and auxiliary voxel matrixes to obtain a three-dimensional reconstruction result.
2. The method according to claim 1, wherein the acquiring the two-dimensional ultrasound data acquired by the ultrasound probe under the main view angle and the auxiliary view angle, to obtain a main view angle image set and an auxiliary view angle image set, includes:
placing the ultrasonic probe in the negative direction of a first coordinate axis so that the ultrasonic probe is under the main visual angle; translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction respectively to acquire two-dimensional ultrasonic data, so as to obtain a first main visual angle image set and a second main visual angle image set;
placing the ultrasonic probe in the positive direction of a first coordinate axis so that the ultrasonic probe is under the auxiliary view angle; and translating the ultrasonic probe along the second coordinate axis direction and the third coordinate axis direction respectively to acquire two-dimensional ultrasonic data, so as to obtain a first auxiliary visual angle image set and a second auxiliary visual angle image set.
3. The method of claim 2, wherein said converting the set of primary view images and the set of secondary view images from the ultrasound coordinate system to the view coordinate system to obtain a main body pixel matrix and a secondary voxel matrix comprises:
converting the first main visual angle image set from an ultrasonic coordinate system to a view coordinate system, and storing the obtained first group of pixel points in the main body pixel matrix; converting the second main visual angle image set from an ultrasonic coordinate system to a view coordinate system, and updating data in the main body pixel matrix by using the obtained second group of pixel points to obtain a final main body pixel matrix;
converting the first auxiliary view angle image set from an ultrasonic coordinate system to a view coordinate system, and storing the obtained third group of pixel points in the auxiliary voxel matrix; and converting the second auxiliary view angle image set from an ultrasonic coordinate system to a view coordinate system, and updating data in the auxiliary voxel matrix by using the obtained fourth group of pixel points to obtain the final auxiliary voxel matrix.
4. The method according to claim 3, wherein
the updating of the data in the main body pixel matrix by using the obtained second group of pixel points to obtain the final main body pixel matrix comprises the following steps:
traversing the pixel points in the second group of pixel points, and if the current pixel point does not exist in the main body pixel matrix, storing the current pixel point in the main body pixel matrix; if the current pixel point exists in the main body pixel matrix, carrying out weighted average calculation on the current pixel point and the existing pixel point, and replacing the existing pixel point in the main body pixel matrix with the obtained pixel point;
and updating the data in the auxiliary voxel matrix by using the obtained fourth group of pixel points to obtain a final auxiliary voxel matrix, wherein the updating comprises the following steps:
traversing the pixel points in the fourth group of pixel points, and if the current pixel point does not exist in the auxiliary voxel matrix, storing the current pixel point in the auxiliary voxel matrix; if the current pixel point exists in the auxiliary voxel matrix, carrying out weighted average calculation on the current pixel point and the existing pixel point, and replacing the existing pixel point in the auxiliary voxel matrix with the obtained pixel point.
5. The method according to claim 2, wherein the performing weighted calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix comprises:
traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the negative direction of a first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a first weight strategy to obtain a first partial value in the main and auxiliary voxel matrices;
traversing all voxels of the main body pixel matrix and the auxiliary voxel matrix in the positive direction of the first coordinate axis, and carrying out weighted calculation on the same voxels in the main body pixel matrix and the auxiliary voxel matrix according to a second weight strategy to obtain a second partial value in the main and auxiliary voxel matrices;
wherein in the first weight strategy, the weight of the voxels in the main body pixel matrix is greater than the weight of the voxels in the auxiliary voxel matrix; in the second weight strategy, the weight of the voxels in the main body pixel matrix is smaller than the weight of the voxels in the auxiliary voxel matrix.
6. The method according to claim 1, wherein the performing three-dimensional reconstruction on the primary and secondary voxel matrices to obtain three-dimensional reconstruction results includes:
scaling the main voxel matrix and the auxiliary voxel matrix from the view coordinate system to a reconstruction coordinate system in equal proportion to obtain a reconstruction main voxel matrix and an auxiliary voxel matrix;
and performing three-dimensional reconstruction on the reconstructed main and auxiliary voxel matrixes by adopting a bicubic interpolation method to obtain the three-dimensional reconstruction result.
7. The method of claim 6, wherein performing three-dimensional reconstruction on the reconstructed primary and secondary voxel matrix by using bicubic interpolation to obtain the three-dimensional reconstruction result comprises:
dividing the reconstructed main and auxiliary voxel matrixes into a plurality of subareas;
for each sub-region, performing three-dimensional reconstruction of a bicubic interpolation method in parallel through an independent GPU to obtain a sub-region three-dimensional reconstruction result;
and fusing all the three-dimensional reconstruction results of the subareas to obtain a final three-dimensional reconstruction result.
8. A three-dimensional reconstruction apparatus for multi-view two-dimensional ultrasound images, the apparatus comprising:
the two-dimensional ultrasonic data acquisition module is used for acquiring two-dimensional ultrasonic data acquired by the ultrasonic probe under the main view angle and the auxiliary view angle to obtain a main view angle image set and an auxiliary view angle image set;
the voxel matrix acquisition module is used for converting the main visual angle image set and the auxiliary visual angle image set from an ultrasonic coordinate system to a view coordinate system to obtain a main body pixel matrix and an auxiliary voxel matrix;
the matrix weighting calculation module is used for carrying out weighting calculation on the main body pixel matrix and the auxiliary voxel matrix to obtain a main and auxiliary voxel matrix;
and the three-dimensional reconstruction module is used for carrying out three-dimensional reconstruction on the main and auxiliary voxel matrix to obtain a three-dimensional reconstruction result.
9. A computer device, characterized in that it comprises a processor and a memory, said memory storing at least one instruction, at least one program, code set or instruction set, said at least one instruction, at least one program, code set or instruction set being loaded and executed by said processor to implement a three-dimensional reconstruction method of a multi-view two-dimensional ultrasound image according to any of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, code set, or instruction set loaded and executed by a processor to implement the three-dimensional reconstruction method of a multi-view two-dimensional ultrasound image according to any of claims 1 to 7.
CN202310219770.4A 2023-03-08 2023-03-08 Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image Pending CN116363303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310219770.4A CN116363303A (en) 2023-03-08 2023-03-08 Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310219770.4A CN116363303A (en) 2023-03-08 2023-03-08 Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image

Publications (1)

Publication Number Publication Date
CN116363303A true CN116363303A (en) 2023-06-30

Family

ID=86927264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310219770.4A Pending CN116363303A (en) 2023-03-08 2023-03-08 Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image

Country Status (1)

Country Link
CN (1) CN116363303A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117414154A (en) * 2023-09-05 2024-01-19 骨圣元化机器人(深圳)有限公司 Three-dimensional ultrasonic reconstruction method, device and ultrasonic system


Similar Documents

Publication Publication Date Title
CN109949899B (en) Image three-dimensional measurement method, electronic device, storage medium, and program product
AU2011206927B2 (en) A computed tomography imaging process and system
US20120013710A1 (en) System and method for geometric modeling using multiple data acquisition means
JP6469360B2 (en) Improved eye image processing
AU2018301580B2 (en) Three-dimensional ultrasound image display method
US20040208279A1 (en) Apparatus and methods for multiple view angle stereoscopic radiography
CN108113700B (en) Position calibration method applied to three-dimensional ultrasonic imaging data acquisition
JP2004237088A (en) Three-dimensional back projection method and x-ray ct apparatus
CN116363303A (en) Three-dimensional reconstruction method, device, equipment and medium for multi-view two-dimensional ultrasonic image
DE102011114333A1 (en) Method for registering C-arm X-ray device suitable for three-dimensional reconstruction of X-ray volume, involves producing X-ray projection of X-ray marks with C-arm X-ray unit
CN103593869A (en) Scanning equipment and image display method thereof
CN101825433B (en) Measuring method of offset of rotating center of rotating table of fan beam 2D-CT scanning system
JP5177606B1 (en) Three-dimensional ultrasonic image creation method and program
CN104851129B (en) A kind of 3D method for reconstructing based on multiple views
CN111833392A (en) Multi-angle scanning method, system and device for mark points
CN111681297A (en) Image reconstruction method, computer device, and storage medium
CN110930394B (en) Method and terminal equipment for measuring slope and pinnate angle of muscle fiber bundle line
CN114299096A (en) Outline delineation method, device, equipment and storage medium
US20040233193A1 (en) Method for visualising a spatially resolved data set using an illumination model
CN111166373B (en) Positioning registration method, device and system
CN113081033A (en) Three-dimensional ultrasonic imaging method based on space positioning device, storage medium and equipment
CN111184535A (en) Handheld unconstrained scanning wireless three-dimensional ultrasound real-time voxel imaging system
CN117095137B (en) Three-dimensional imaging method and system of medical image based on two-way image acquisition
CN112998693B (en) Head movement measuring method, device and equipment
CN110680371B (en) Human body internal and external structure imaging method and device based on structured light and CT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination