CN112598808B - Data processing method, device, electronic equipment and storage medium - Google Patents

Data processing method, device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112598808B
CN112598808B (application number CN202011541156.2A)
Authority
CN
China
Prior art keywords
volume data
dimensional volume
dimensional
data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011541156.2A
Other languages
Chinese (zh)
Other versions
CN112598808A (en)
Inventor
高毅
陈晓辉
高喜璨
杨珊灵
巨艳
宋宏萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011541156.2A priority Critical patent/CN112598808B/en
Publication of CN112598808A publication Critical patent/CN112598808A/en
Application granted granted Critical
Publication of CN112598808B publication Critical patent/CN112598808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0825 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the breast, e.g. mammography
    • A61B8/48 Diagnostic techniques
    • A61B8/483 Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical

Abstract

The embodiment of the invention discloses a data processing method, a data processing apparatus, an electronic device and a storage medium. The method comprises the following steps: acquiring three-dimensional volume data of a target object from at least two orientations, extracting at least three feature points from each set of three-dimensional volume data, and determining the correspondences between the feature points in the respective sets of three-dimensional volume data; determining a transformation matrix between the three-dimensional volume data of the orientations based on the correspondences between the feature points; and fusing the three-dimensional volume data of the orientations based on the transformation matrices between the sets of three-dimensional volume data to obtain fused three-dimensional volume data of the target object, thereby obtaining complete and accurate global volume data of the target object.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to a data processing method, apparatus, electronic device, and storage medium.
Background
Three-dimensional ultrasonic imaging systems have attracted wide attention from researchers and healthcare workers for their intuitive visual effect and resulting clinical value. Automated whole-breast ultrasound imaging in particular provides intuitive and standardized three-dimensional volume data of the breast.
The probe of current automated whole-breast ultrasound imaging equipment is about 15 cm wide. Before scanning begins, the probe is placed transversely, with a left-right span of 15 cm. During the scan, the probe translates upward while remaining transverse, sweeping the whole breast area.
In this scanning mode, even if the left-right width of a single breast is smaller than 15 cm, the two side regions of the probe fit the skin surface of the breast more poorly than the middle region does, so complete and accurate breast volume data cannot be obtained.
Disclosure of Invention
The embodiments of the invention provide a data processing method, an apparatus, an electronic device and a storage medium, so as to obtain complete and accurate global volume data of a target object.
In a first aspect, an embodiment of the present invention provides a data processing method, including:
three-dimensional volume data of at least two directions of a target object are obtained, at least three characteristic points in the three-dimensional volume data are respectively extracted, and corresponding relations among the characteristic points in the three-dimensional volume data are determined;
determining a transformation matrix between the three-dimensional volume data of each azimuth based on the corresponding relation between the characteristic points in the three-dimensional volume data;
and fusing the three-dimensional volume data of each azimuth based on a transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
In a second aspect, an embodiment of the present invention further provides a data processing apparatus, including:
the information acquisition module is used for acquiring three-dimensional volume data of at least two directions of a target object, respectively extracting at least three characteristic points in the three-dimensional volume data, and determining the corresponding relation between the characteristic points in the three-dimensional volume data;
the transformation matrix determining module is used for determining a transformation matrix between the three-dimensional volume data of each azimuth based on the corresponding relation between the characteristic points in the three-dimensional volume data;
and the data fusion module is used for fusing the three-dimensional volume data of all orientations based on the transformation matrices among the sets of three-dimensional volume data to obtain the fused three-dimensional volume data of the target object.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the data processing method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a data processing method according to any of the embodiments of the present invention.
According to the technical scheme, feature points are extracted from the acquired local three-dimensional volume data of each orientation of the target object, the correspondences between the feature points in the different sets of volume data are determined, and the transformation matrices between the sets of volume data are determined from these correspondences. The local three-dimensional volume data of the orientations can then be fused according to the transformation matrices to obtain global three-dimensional volume data of the target object, so that global volume data of the whole target object and its peripheral area are obtained directly and the tissue structure of the whole target object area and its periphery is better presented.
Drawings
FIG. 1 is a flow chart of a data processing method in a first embodiment of the invention;
FIG. 2 is a schematic diagram showing the acquisition of three-dimensional volume data of different orientations of a target object according to a first embodiment of the present invention;
FIG. 3 is a schematic view of acquiring images of different orientations of a target object in accordance with a first embodiment of the present invention;
FIG. 4 is an overlapping schematic view of three-dimensional volume data for each orientation in a first embodiment of the invention;
FIG. 5 is a schematic view of overlapping regions of three-dimensional volume data for two orientations in accordance with a first embodiment of the present invention;
FIG. 6 is a diagram showing the effect of fusing three-dimensional volume data in accordance with the first embodiment of the present invention;
FIG. 7 is a schematic diagram showing correspondence determination between feature points in three-dimensional volume data in the second embodiment of the present invention;
FIG. 8 is a schematic diagram of a data processing apparatus according to a third embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention, where the embodiment is applicable to a case of fusing volume data of target objects in different directions to obtain global volume data of the target objects, the method may be performed by a data processing apparatus, and the data processing apparatus may be implemented by software and/or hardware, and the data processing apparatus may be configured on an electronic computing device, and specifically includes the following steps:
s110, three-dimensional volume data of at least two directions of the target object are obtained, at least three characteristic points in the three-dimensional volume data are respectively extracted, and corresponding relations among the characteristic points in the three-dimensional volume data are determined.
The target object may be any object whose volume data of the various orientations need to be fused to obtain global volume data, for example a human or an animal, or a tissue or organ of a human or animal.
The three-dimensional volume data of the at least two orientations may be three-dimensional volume data of the target object acquired from different orientations.
Specifically, referring to fig. 2, which shows the acquisition of three-dimensional volume data of different orientations of the target object: the target object is a human breast, and scans are performed from the middle area of the breast (the AP position), from the medial side of the breast (the MED position), and from the lateral side of the breast (the LAT position). As can be seen from fig. 2, the three scan volumes AP, LAT and MED together effectively cover the entire breast area, so three-dimensional volume data can be acquired for the different orientations of the breast.
It should be noted that three-dimensional volume data of different orientations are required, and the volume data of all orientations must combine to form global three-dimensional volume data of the whole target object; here, the volume data of the three orientations AP, LAT and MED effectively cover the whole breast area.
If the three-dimensional volume data of the three orientations AP, LAT and MED cannot effectively cover the entire breast area, additional scans can be performed on peripheral regions of the breast, such as its upper and lower sides, to ensure coverage of the entire breast area.
Fig. 3 shows images of different orientations of the target object, wherein the upper left corner of fig. 3 is a transverse (cross-sectional) image, the lower right corner a coronal image, the lower left corner a sagittal image, and the upper right corner a volume rendering of the three-dimensional volume data.
After three-dimensional volume data of different directions of the target object are acquired, at least three characteristic points of the three-dimensional volume data of each direction are respectively extracted.
The feature points may be volume data points that may connect three-dimensional volume data of different orientations.
Referring to fig. 4, which shows the overlap of the three-dimensional volume data of the orientations: after the volume data of the AP, LAT and MED orientations are obtained as in fig. 2, the three volumes overlap one another, and volume data points in the overlapping regions can serve as feature points.
The reason at least three feature points are acquired from the three-dimensional volume data of each orientation is to prepare for the subsequent computation of the transformation matrices between the volume data of the orientations: a transformation matrix between the volume data of two orientations can be computed only if each orientation provides at least three feature points. (A rigid spatial transformation in three dimensions has six degrees of freedom, which three non-collinear point pairs suffice to determine.)
In the embodiment of the invention, the feature points in the three-dimensional volume data can be obtained by matching against preset feature information of the feature points, or by a neural network model or algorithm, among other ways. The specific manner of obtaining the feature points is described in more detail in the following embodiment.
After each feature point in each three-dimensional volume data is acquired, each feature point in each three-dimensional volume data can be formed into a feature point set. For example, there are three-dimensional volume data of three directions of AP, LAT and MED, 50 feature points are acquired from the three-dimensional volume data of the AP directions, and then the 50 feature points are formed into a feature point set of the AP directions; 60 feature points are acquired from three-dimensional volume data of the LAT azimuth, and then the 60 feature points form a feature point set of the LAT azimuth; 80 feature points are acquired from the three-dimensional volume data of the MED azimuth, and then the 80 feature points are formed into a feature point set of the MED azimuth.
After the feature point set of each orientation is formed, the correspondences between the feature points in the sets can be determined, for example by finding which feature point in the feature point set of the AP orientation corresponds to which feature point in the feature point set of the LAT orientation.
Optionally, after the feature points in the three-dimensional volume data are extracted, the method may further include: identifiers are added to the feature points in the three-dimensional volume data.
The identifier may be a unique identifier added to each feature point in each three-dimensional volume data, for example, a unique number or the like may be attached to each feature point in each three-dimensional volume data.
Specifically, for example, suppose there are three-dimensional volume data of two orientations A and B, with 5 feature points in A and 6 feature points in B. The 5 feature points in A are numbered 1, 2, 3, 4 and 5, and the 6 feature points in B are numbered 1', 2', 3', 4', 5' and 6'. The feature points can thus be distinguished, avoiding confusion when the correspondences between the feature points in A and B are subsequently determined.
In this way, once identifiers are added to the feature points in the three-dimensional volume data of each orientation, the feature points of the different orientations are not confused when the correspondences between them are determined.
S120, determining a transformation matrix between the three-dimensional volume data of each azimuth based on the corresponding relation between the characteristic points in the three-dimensional volume data.
For example, after the correspondence between the feature points in the three-dimensional volume data of each azimuth is determined, the transformation matrix between the three-dimensional volume data of each azimuth may be determined according to the correspondence between the feature points in the three-dimensional volume data of each azimuth.
In the embodiment of the invention, after the correspondences between the feature points in the three-dimensional volume data of the orientations are determined, the feature points that correspond one to one are recorded.
Taking the three-dimensional volume data of any two orientations as an example, denote the volume data of the first orientation as V1 and that of the second orientation as V2. The set of feature points in V1 that have correspondences in V2 is denoted $\{p_j^{1}\}$, and the set of feature points in V2 that have correspondences in V1 is denoted $\{p_j^{2}\}$, $j = 1, 2, 3, \ldots, n$, where the point corresponding to $p_j^{1}$ is $p_j^{2}$.
In the embodiment of the present invention, as shown in fig. 4, because the breast is soft tissue, it is compressed and deformed differently when three-dimensional volume data of different orientations are acquired, and the resulting images therefore also differ. That is, as the physical imaging process makes clear, the difference between the volume data of different orientations is not a rigid-body deformation.
Denote the spatial transformation between the three-dimensional volume data of any two orientations as T, with the parameter T representing the optimal spatial transformation; T is then calculated by the following optimization process:
optionally, the determining the transformation matrix between the three-dimensional volume data of each azimuth based on the correspondence between each feature point in each three-dimensional volume data may specifically be: for three-dimensional volume data of any two directions of a target object, taking three-dimensional volume data of one direction as first three-dimensional volume data, taking three-dimensional volume data of the other direction as second three-dimensional volume data, and determining a transformation matrix between the three-dimensional volume data of the two directions based on the corresponding relation between each characteristic point in the three-dimensional volume data of the two directions according to the following formula:
$$T^{\ast} = \underset{T}{\arg\min} \sum_{j=1}^{n} \left\| T\!\left(p_j^{1}\right) - p_j^{2} \right\|^{2}$$
wherein $p_j^{1}$ are the feature points in the first three-dimensional volume data of the target object that have correspondences with the feature points in the second three-dimensional volume data, $p_j^{2}$ are the feature points in the second three-dimensional volume data that have correspondences with the feature points in the first, $p_j^{1}$ and $p_j^{2}$ correspond one to one, $j = 1, 2, 3, \ldots, n$, and $T$ is the transformation matrix; the optimal $T$ is the value of $T$ at which $\sum_{j=1}^{n} \| T(p_j^{1}) - p_j^{2} \|^{2}$ is minimal.
In the embodiment of the present invention, when calculating T, the least-squares method, gradient descent, or the Gauss-Newton method (or the quasi-Newton and Newton methods) may be used to solve the above formula, so as to obtain the optimal transformation matrix T.
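As a minimal sketch of this solving step (the patent fixes neither the transformation model nor the solver): assuming an affine 4x4 matrix fitted by linear least squares on the matched point pairs, the closed-form `lstsq` solution below minimizes $\sum_j \| T(p_j^1) - p_j^2 \|^2$ directly. The function names are illustrative; a full affine fit needs at least four non-coplanar pairs, while a rigid-only model would get by with the three pairs mentioned above.

```python
import numpy as np

def estimate_transform(p1, p2):
    """Least-squares estimate of a 4x4 affine transform T minimizing
    sum_j ||T(p1_j) - p2_j||^2 over matched feature points.

    p1, p2 : (n, 3) arrays of corresponding points (p1_j <-> p2_j).
    An affine fit needs n >= 4 non-coplanar pairs.
    """
    n = p1.shape[0]
    p1_h = np.hstack([p1, np.ones((n, 1))])        # homogeneous (n, 4)
    # Solve p1_h @ X ~= p2 for X (4, 3) in the least-squares sense.
    X, *_ = np.linalg.lstsq(p1_h, p2, rcond=None)
    T = np.eye(4)
    T[:3, :] = X.T                                 # embed as 4x4
    return T

def apply_transform(T, pts):
    """Apply the 4x4 transform T to an (n, 3) array of points."""
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (pts_h @ T.T)[:, :3]
```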
S130, fusing the three-dimensional volume data of all directions based on the transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
For example, the fused three-dimensional volume data may be global three-dimensional volume data of the target object formed by fusing three-dimensional volume data of each azimuth.
After the corresponding relation between the characteristic points in the three-dimensional volume data is obtained (namely after registration), a transformation matrix between the three-dimensional volume data can be determined, and after the transformation matrix between the three-dimensional volume data is determined, the three-dimensional volume data of all directions can be fused to obtain the fused three-dimensional volume data of the target object.
Thus, global three-dimensional volume data of the target object are obtained from the acquired local volume data of each orientation; global volume data of the whole target object and its peripheral area can be obtained directly, and the tissue structure of the whole target object area and its periphery is better presented. When the target object is an object of medical study, a doctor can better understand its spatial relationships and lesion extent from the panoramic image of the target object.
Optionally, the fusing the three-dimensional data of each azimuth to obtain fused three-dimensional data of the target object may be: for the three-dimensional volume data of any two directions of the target object, executing the following steps of fusing the three-dimensional volume data of the two directions: taking the three-dimensional volume data of one azimuth as reference three-dimensional volume data, and taking the three-dimensional volume data of the other azimuth as three-dimensional volume data to be registered; based on a transformation matrix between the three-dimensional volume data of the two directions, mapping the coordinates of the three-dimensional volume data to be registered into a coordinate system of the reference three-dimensional volume data, and fusing the three-dimensional volume data of the two directions.
For three-dimensional volume data of any two directions, three-dimensional volume data of one direction can be used as reference three-dimensional volume data, three-dimensional volume data of the other direction can be used as three-dimensional volume data to be registered, and coordinates of the three-dimensional volume data to be registered are mapped into a coordinate system of the reference three-dimensional volume data based on a transformation matrix between the three-dimensional volume data of the two directions, so that fusion of the three-dimensional volume data of the two directions can be achieved, and fused three-dimensional volume data can be obtained.
In the embodiment of the present invention, for three-dimensional volume data of more than two orientations, for example three orientations, the volume data of two orientations may be fused first, and the volume data of the third orientation then fused with that intermediate result, giving the fusion result of the three-dimensional volume data of all three orientations.
Referring to fig. 5, which shows the overlapping region of the three-dimensional volume data of two orientations: when the volume data of fig. 4 overlap, an overlapping region appears in the corresponding images (for example, the areas circled by the square frames in the upper-left, lower-left and lower-right images of fig. 5). One of the two images is set as the fixed image and the other as the moving image, and the moving image is mapped onto the fixed image (specifically, the coordinates of the moving image's three-dimensional volume data are mapped into the coordinate system of the fixed image); each voxel value of the overlapping region is then set to half the sum of the fixed-image voxel value and the moving-image voxel value, i.e., (fixed + moving) / 2.
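The mapping-and-averaging step might look as follows; this is a sketch under stated assumptions (voxel value 0 marks background, trilinear interpolation, SciPy's `affine_transform` for resampling), not the patent's own implementation. `T` is the 4x4 transform estimated above, mapping moving-volume coordinates into the fixed volume's coordinate system.

```python
import numpy as np
from scipy.ndimage import affine_transform

def fuse_volumes(fixed, moving, T):
    """Fuse two volumes given a 4x4 transform T that maps moving-volume
    voxel coordinates into the fixed volume's coordinate system.

    In the overlap, the fused voxel is (fixed + moving) / 2, as above.
    Assumes voxel value 0 marks "no data" (background).
    """
    fixed = fixed.astype(np.float32)
    moving = moving.astype(np.float32)

    # scipy's affine_transform expects the inverse mapping: for each
    # output (fixed-frame) voxel it looks up the moving-frame location.
    T_inv = np.linalg.inv(T)
    moving_in_fixed = affine_transform(
        moving, T_inv[:3, :3], offset=T_inv[:3, 3],
        output_shape=fixed.shape, order=1, cval=0.0)

    fixed_mask = fixed > 0
    moving_mask = moving_in_fixed > 0
    overlap = fixed_mask & moving_mask

    # Take whichever volume has data; average where both do.
    fused = np.where(fixed_mask, fixed, moving_in_fixed)
    fused[overlap] = (fixed[overlap] + moving_in_fixed[overlap]) / 2.0
    return fused
```

For three orientations, the result of fusing the first two volumes would simply be passed back in as the new fixed volume, matching the sequential scheme described above.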
Referring to fig. 6, which shows the effect of fusing the three-dimensional volume data: after the volume data of all orientations of the target object are fused, global three-dimensional volume data of the target object are obtained. As can be seen from the lower-left image in fig. 6, the complete rib spans the two sets of volume data (M and N in that image are the rib portions in the volume data of the two orientations). Through the registration and fusion of the embodiment of the invention, the ribs in the two sets of three-dimensional volume data are accurately matched together, and the panoramic three-dimensional volume data accurately covers the whole rib range.
According to the technical scheme of this embodiment, feature points are extracted from the acquired local three-dimensional volume data of each orientation of the target object, the correspondences between the feature points in the different sets of volume data are determined, and the transformation matrices between the sets of volume data are determined from these correspondences. The local three-dimensional volume data of the orientations can then be fused according to the transformation matrices to obtain global three-dimensional volume data of the target object, so that global volume data of the whole target object and its peripheral area are obtained directly and the tissue structure of the whole target object area and its periphery is better presented.
Example two
The embodiment of the present invention may be combined with the alternatives in the embodiment above. This embodiment describes in detail the extraction of the feature points in the three-dimensional volume data and the determination of the correspondences between the feature points in the respective sets of three-dimensional volume data.
At least three characteristic points in each three-dimensional volume data are extracted, and the corresponding relation between the characteristic points in each three-dimensional volume data is determined, specifically, the following two modes can be adopted:
(1) For any set of three-dimensional volume data, determine at least three feature points in it that satisfy a preset condition; then determine the correspondences between the feature points in the sets of volume data based on the single-shot (one-way, injective) mapping relationships between the feature points of any two sets.
For example, for three-dimensional volume data of any azimuth, at least three volume data points satisfying a preset condition in the three-dimensional volume data can be extracted as feature points.
The preset condition is set in advance. For example, the three-dimensional volume data of one orientation is selected, the Gaussian difference (difference-of-Gaussians) image of the image corresponding to that volume data is computed, and the positions of the extreme points in the Gaussian difference image give the feature points; the preset condition in this case is being an extreme point of the difference image.
In this way, the feature points in the three-dimensional volume data of every orientation can be extracted.
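One concrete reading of this extraction step, sketched below with SciPy: build the difference-of-Gaussians image from two blurred copies of the volume and keep its local extrema as candidate feature points. The sigma values, 3x3x3 neighborhood and `top_k` cap are illustrative assumptions, not values given by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_feature_points(volume, sigma1=1.0, sigma2=1.6, top_k=50):
    """Detect candidate feature points of a 3D volume as local extrema
    of its difference-of-Gaussians (DoG) image.

    Returns an (m, 3) array of voxel coordinates, strongest first.
    """
    dog = gaussian_filter(volume, sigma1) - gaussian_filter(volume, sigma2)

    # A voxel is an extremum if it equals the max (or min) of its
    # 3x3x3 neighborhood.
    local_max = (dog == maximum_filter(dog, size=3))
    local_min = (dog == minimum_filter(dog, size=3))
    extrema = local_max | local_min

    coords = np.argwhere(extrema)
    # Rank by DoG magnitude and keep the strongest responses.
    strength = np.abs(dog[extrema])
    order = np.argsort(strength)[::-1][:top_k]
    return coords[order]
```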
It should be noted that the above preset condition is only one possible condition, and other possible conditions may be available, which are not listed here, but those skilled in the art should clearly understand that any preset condition capable of extracting the feature points falls within the protection scope of the embodiments of the present invention.
When the feature points are extracted in this way, they do not yet correspond to each other one to one, so the correspondences between the feature points in the sets of three-dimensional volume data still need to be determined, specifically based on the single-shot mapping relationships between the feature points of any two sets.
Optionally, determining the correspondences between the feature points based on the single-shot relationships between the feature points of any two sets of three-dimensional volume data may specifically be as follows. For the three-dimensional volume data of any two orientations, take the volume data of one orientation as the first three-dimensional volume data and that of the other orientation as the second three-dimensional volume data, and perform the following steps: determine a first single mapping value from each feature point in the first volume data to the feature points in the second volume data; determine a second single mapping value from each feature point in the second volume data to the feature points in the first volume data; if a first single mapping value and a second single mapping value satisfy the full-shot relation (i.e., the forward and backward mappings are mutually consistent), the two feature points satisfying it are feature points having a correspondence.
For example, for three-dimensional volume data of any two orientations, three-dimensional volume data of one orientation is taken as first three-dimensional volume data, and three-dimensional volume data of the other orientation is taken as second three-dimensional volume data.
The first single mapping value may be a mapping value from a feature point in the first three-dimensional volume data to the feature points in the second three-dimensional volume data.
The second single mapping value may be a mapping value from a feature point in the second three-dimensional volume data to the feature points in the first three-dimensional volume data.
Fig. 7 shows the determination of the correspondences between the feature points in the sets of three-dimensional volume data, where A is the first three-dimensional volume data, B is the second three-dimensional volume data, 1, 2, 3, ..., n in A are the feature points in the first volume data, and 1', 2', 3', ..., n' in B are the feature points in the second volume data.
A first single map value of each feature point in the first three-dimensional volume data to each feature point in the second three-dimensional volume data is determined. Specifically, as shown in fig. 7, mapping values from 1 in a to 1', 2', 3', … …, and n' in B may be determined, mapping values from 2 in a to 1', 2', 3', … …, and n' in B may be determined, and so on, to determine a first single mapping value from each feature point in a to each feature point in B.
Correspondingly, a second single mapping value from each characteristic point in the second three-dimensional volume data to each characteristic point in the first three-dimensional volume data is determined. Specifically, as shown in fig. 7, mapping values from 1 'in B to 1,2,3, … … and n in a may be determined, mapping values from 2' in B to 1,2,3, … … and n in a may be determined, and so on, to determine a second single mapping value from each feature point in B to each feature point in a.
If a first single mapping value and a second single mapping value satisfy the full-shot relation, the two feature points having that relation are determined to be feature points with a correspondence.
Specifically, for example, as shown in fig. 7, the first single mapping value from 1 in A to 1' in B and the second single mapping value from 1' in B to 1 in A satisfy the full-shot relation, so 1 in A and 1' in B have a correspondence. The first single mapping value from 2 in A to 2' in B and the second single mapping value from 2' in B to 2 in A do not satisfy the full-shot relation, so 2 in A and 2' in B have no correspondence. That is, each matched feature point in A has a unique one-to-one correspondence with a feature point in B.
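The patent does not say how the single mapping values are computed. Assuming each feature point carries a descriptor vector and each single mapping is a nearest-neighbor assignment by descriptor distance, the full-shot check reduces to keeping mutually consistent pairs, as in this sketch (all names illustrative):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Match feature points of volumes A and B by mutual consistency.

    desc_a : (n, d) descriptors of A's feature points
    desc_b : (m, d) descriptors of B's feature points
    Returns a list of (i, j) pairs where A's point i maps to B's point j
    AND B's point j maps back to A's point i (the bijective check).
    """
    # Pairwise Euclidean distances between descriptors.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)

    a_to_b = np.argmin(dists, axis=1)   # forward ("first single mapping")
    b_to_a = np.argmin(dists, axis=0)   # backward ("second single mapping")

    # Keep only pairs where the two mappings agree.
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```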
(2) For any two sets of three-dimensional volume data: first, perform feature matching in each set based on at least one piece of preset feature information, determine at least one feature point corresponding to that feature information in each set, and set the correspondence between the feature points corresponding to the same feature information in the different sets; second, determine at least two further feature points satisfying the preset condition in each set, and determine the correspondences between these feature points based on the single-shot relationships between the feature points of any two sets.
The feature information may be feature information of a feature point set in advance, for example.
Specifically, take as an example three feature points acquired per set of volume data and the three-dimensional volume data of the AP, LAT and MED orientations, with the target object of fig. 2 being a breast. Since the target object is known (a breast), the common area of the volume data of the three orientations is the nipple area; that is, the feature information here may be the nipple.
Since the nipple area is distinct and its position is definite, matching the volume data points of the nipple area (the feature information) in the three-dimensional volume data of each orientation determines one feature point in each orientation's volume data.
Because the nipple area is distinct, determining this feature point in the volume data of each orientation simultaneously determines the correspondence between these feature points, and the correspondence between the feature points corresponding to the same feature information in the different sets of volume data can be set directly.
Specifically, before the three-dimensional volume data of the AP, LAT and MED orientations are acquired, a doctor marks the nipple position in the volume data of each orientation as a reference, so the nipple positions in the volume data of two adjacent orientations can be used as corresponding feature points.
After this one feature point is obtained from the feature information, another two feature points can be obtained based on the preset condition using mode (1) above, and the correspondences of those two feature points can likewise be determined based on mode (1).
Thus, three feature points in each set of three-dimensional volume data, and the correspondences between them, can be found.
That is, at least one piece of preset feature information, for example nipple-area information, may be used: one feature point is determined from the nipple-area information, and when that feature point is determined, its correspondence is determined with it. Then, based on mode (1), two feature points satisfying the preset condition (for example, extreme points of the difference image) are determined as the remaining two feature points, and their correspondences between the sets of volume data are determined as in mode (1). In this way, at least three feature points in the three-dimensional volume data of each orientation, and the correspondences between them, can be determined.
In this way, the feature points in the three-dimensional volume data of each orientation, and the correspondences between them, can be determined by either of the two modes above, so that the transformation matrices between the volume data of the orientations can be determined from those correspondences and the volume data of the orientations fused into global three-dimensional volume data of the target object.
It should be noted that the user may select either of the two modes for determining the feature points and their correspondences according to need; this is not limited here. The user may, of course, also apply the two modes together.
It should further be noted that when both modes are used to determine the at least three feature points in each set of volume data and the correspondences between them, the results of the two modes can be checked against each other, improving the accuracy of the feature points and of their correspondences.
According to the technical scheme of this embodiment, the feature points in the three-dimensional volume data of each orientation and the correspondences between them are determined, the transformation matrices between the volume data of the orientations are determined from those correspondences, and the volume data of the orientations are thereby fused to obtain global three-dimensional volume data of the target object.
Example III
Fig. 8 is a schematic structural diagram of a data processing apparatus according to a third embodiment of the present invention, as shown in fig. 8, where the apparatus includes: an information acquisition module 31, a transformation matrix determination module 32 and a data fusion module 33.
The information acquisition module 31 is configured to acquire three-dimensional volume data of at least two directions of a target object, extract at least three feature points in each three-dimensional volume data, and determine a correspondence between feature points in each three-dimensional volume data;
a transformation matrix determining module 32, configured to determine a transformation matrix between the three-dimensional volume data of each azimuth based on the correspondence between each feature point in each three-dimensional volume data;
and a data fusion module 33, configured to fuse the three-dimensional volume data of each azimuth based on a transformation matrix between the three-dimensional volume data, and obtain fused three-dimensional volume data of the target object.
On the basis of the technical solution of the above embodiment, the information acquisition module 31 includes:
a first correspondence determining unit, configured to perform feature matching in each three-dimensional volume data based on at least one preset feature information, and determine at least three feature points corresponding to the feature information in each three-dimensional volume data; setting corresponding relations among all feature points corresponding to the same feature information in different three-dimensional volume data;
a second correspondence determining unit, configured to determine at least two feature points in each three-dimensional volume data that satisfy a preset condition; and determining the corresponding relation between the characteristic points in the three-dimensional volume data based on the single-shot relation between the characteristic points in any two three-dimensional volume data.
On the basis of the technical solution of the foregoing embodiment, the information obtaining module 31 may further include:
a third correspondence determining unit, configured to determine, for any three-dimensional volume data, at least three feature points in each three-dimensional volume data that satisfy a preset condition; and the method is used for determining the corresponding relation between the characteristic points in the three-dimensional volume data based on the single-shot relation between the characteristic points in any two three-dimensional volume data.
On the basis of the technical solution of the foregoing embodiment, the second correspondence determining unit or the third correspondence determining unit includes:
a first single mapping value determining subunit, configured to determine, for three-dimensional volume data of any two directions, a first single mapping value from each feature point in the first three-dimensional volume data to each feature point in the second three-dimensional volume data by using three-dimensional volume data of one direction as first three-dimensional volume data and three-dimensional volume data of another direction as second three-dimensional volume data;
a second single mapping value determining subunit, configured to determine, for three-dimensional volume data of any two directions, a second single mapping value from each feature point in the second three-dimensional volume data to each feature point in the first three-dimensional volume data by using three-dimensional volume data of one direction as first three-dimensional volume data and three-dimensional volume data of another direction as second three-dimensional volume data;
and the corresponding relation determining subunit is used for determining that if the first single mapping value and the second single mapping value meet the full-shot relation, two characteristic points meeting the full-shot relation are characteristic points with corresponding relation.
Based on the technical solutions of the foregoing embodiments, the transformation matrix determining module 32 is specifically configured to:
for three-dimensional volume data of any two directions of a target object, taking three-dimensional volume data of one direction as first three-dimensional volume data, taking three-dimensional volume data of the other direction as second three-dimensional volume data, and determining a transformation matrix between the three-dimensional volume data of the two directions based on the corresponding relation between each characteristic point in the three-dimensional volume data of the two directions according to the following formula:
$$T^{\ast} = \underset{T}{\arg\min} \sum_{j=1}^{n} \left\| T\!\left(p_j^{1}\right) - p_j^{2} \right\|^{2}$$
wherein $p_j^{1}$ are the feature points in the first three-dimensional volume data of the target object that have correspondences with the feature points in the second three-dimensional volume data, $p_j^{2}$ are the feature points in the second three-dimensional volume data that have correspondences with the feature points in the first, $p_j^{1}$ and $p_j^{2}$ correspond one to one, $j = 1, 2, 3, \ldots, n$, and $T$ is the transformation matrix; the optimal $T$ is the value of $T$ at which $\sum_{j=1}^{n} \| T(p_j^{1}) - p_j^{2} \|^{2}$ is minimal.
Based on the technical solutions of the foregoing embodiments, the data fusion module 33 is specifically configured to:
for the three-dimensional volume data of any two directions of the target object, taking the three-dimensional volume data of one direction as reference three-dimensional volume data and the three-dimensional volume data of the other direction as three-dimensional volume data to be registered; and mapping coordinates of the three-dimensional volume data to be registered into a coordinate system of the reference three-dimensional volume data based on a transformation matrix between the three-dimensional volume data of the two directions, and fusing the three-dimensional volume data of the two directions.
On the basis of the technical solution of the foregoing embodiment, the information obtaining module 31 may further include:
an identifier adding unit configured to add an identifier to each feature point in each of the three-dimensional volume data.
The data processing device provided by the embodiment of the invention can execute the data processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention, and as shown in fig. 9, the electronic device includes a processor 70, a memory 71, an input device 72 and an output device 73; the number of processors 70 in the electronic device may be one or more, one processor 70 being taken as an example in fig. 9; the processor 70, the memory 71, the input means 72 and the output means 73 in the electronic device may be connected by a bus or other means, in fig. 9 by way of example.
The memory 71 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules (e.g., the information acquisition module 31, the transformation matrix determination module 32, and the data fusion module 33) corresponding to the data processing method in the embodiment of the present invention. The processor 70 executes various functional applications of the electronic device and data processing, i.e., implements the data processing method described above, by running software programs, instructions, and modules stored in the memory 71.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 71 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 71 may further include memory remotely located relative to processor 70, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output means 73 may comprise a display device such as a display screen.
Example five
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions for performing a data processing method when executed by a computer processor.
Of course, the storage medium containing computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the data processing method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., and include several instructions for causing a computer electronic device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the data processing apparatus, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A method of data processing, comprising:
three-dimensional volume data of at least two directions of a target object are obtained, at least three characteristic points in the three-dimensional volume data are respectively extracted, and corresponding relations among the characteristic points in the three-dimensional volume data are determined;
determining a transformation matrix between the three-dimensional volume data of each azimuth based on the corresponding relation between the characteristic points in the three-dimensional volume data;
based on a transformation matrix among the three-dimensional volume data, fusing the three-dimensional volume data of each azimuth to obtain fused three-dimensional volume data of the target object;
before the extracting at least three feature points in each three-dimensional volume data, the method further comprises:
for any three-dimensional volume data, determining at least three characteristic points meeting preset conditions in each three-dimensional volume data;
for any three-dimensional volume data, determining at least three feature points meeting the preset condition in each three-dimensional volume data comprises the following steps:
the feature points are determined by optionally selecting the three-dimensional volume data of a current azimuth, determining the corresponding Gaussian difference image from the image corresponding to that azimuth's three-dimensional volume data, and then determining the positions of the extreme points in the Gaussian difference image as the feature points.
2. The method according to claim 1, wherein extracting at least three feature points in each of the three-dimensional volume data and determining correspondence between feature points in each of the three-dimensional volume data, respectively, comprises:
performing feature matching in each three-dimensional volume data based on at least one preset feature information, and determining at least one feature point corresponding to the feature information in each three-dimensional volume data; setting corresponding relations among all feature points corresponding to the same feature information in different three-dimensional volume data;
determining at least two characteristic points meeting preset conditions in the three-dimensional volume data; and determining the corresponding relation between the characteristic points in the three-dimensional volume data based on the single-shot relation between the characteristic points in any two three-dimensional volume data.
3. The method of claim 1, wherein the extracting at least three feature points in each of the three-dimensional volume data, respectively, and determining correspondence between feature points in each of the three-dimensional volume data comprises:
and determining the corresponding relation between the characteristic points in the three-dimensional volume data based on the single-shot relation between the characteristic points in any two three-dimensional volume data.
4. A method according to claim 2 or 3, wherein determining the correspondence between the feature points in the three-dimensional volume data based on the uniradial relationship between the feature points in any two three-dimensional volume data comprises:
for three-dimensional volume data of any two directions, taking three-dimensional volume data of one direction as first three-dimensional volume data and three-dimensional volume data of the other direction as second three-dimensional volume data, executing the following steps to determine the corresponding relation between each characteristic point in the three-dimensional volume data:
determining a first single mapping value from each characteristic point in the first three-dimensional volume data to each characteristic point in the second three-dimensional volume data;
determining a second single mapping value from each feature point in the second three-dimensional volume data to each feature point in the first three-dimensional volume data;
and if the first single mapping value and the second single mapping value meet the full-shot relation, the two feature points meeting the full-shot relation are feature points with corresponding relation.
5. The method of claim 1, wherein determining a transformation matrix between the three-dimensional volume data for each azimuth based on correspondence between feature points in each of the three-dimensional volume data comprises:
for three-dimensional volume data of any two directions of a target object, taking three-dimensional volume data of one direction as first three-dimensional volume data, taking three-dimensional volume data of the other direction as second three-dimensional volume data, and determining a transformation matrix between the three-dimensional volume data of the two directions based on the corresponding relation between each characteristic point in the three-dimensional volume data of the two directions according to the following formula:
$$T^{\ast} = \underset{T}{\arg\min} \sum_{j=1}^{n} \left\| T\!\left(p_j^{1}\right) - p_j^{2} \right\|^{2}$$
wherein $p_j^{1}$ are the feature points in the first three-dimensional volume data of the target object that have correspondences with the feature points in the second three-dimensional volume data, $p_j^{2}$ are the feature points in the second three-dimensional volume data that have correspondences with the feature points in the first, $p_j^{1}$ and $p_j^{2}$ correspond one to one, $j = 1, 2, 3, \ldots, n$, and $T$ is the transformation matrix; the optimal $T$ is the value of $T$ at which $\sum_{j=1}^{n} \| T(p_j^{1}) - p_j^{2} \|^{2}$ is minimal.
6. The method according to claim 1, wherein the fusing the three-dimensional volume data of each azimuth based on the transformation matrix between the three-dimensional volume data comprises:
for the three-dimensional volume data of any two azimuths of the target object, performing the following steps to fuse the two three-dimensional volume data:
taking the three-dimensional volume data of one azimuth as reference three-dimensional volume data, and taking the three-dimensional volume data of the other azimuth as three-dimensional volume data to be registered; and
mapping the coordinates of the three-dimensional volume data to be registered into the coordinate system of the reference three-dimensional volume data based on the transformation matrix between the two three-dimensional volume data, and fusing the two three-dimensional volume data.
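A sketch of claim 6 using SciPy: the volume to be registered is resampled into the reference coordinate system via the transformation matrix, then combined with the reference volume. The fusion rule (averaging where the warped volume is nonzero) is an assumption; the claim does not specify one.

```python
import numpy as np
from scipy.ndimage import affine_transform

def fuse_volumes(reference, moving, T):
    """Map `moving` into the reference coordinate system with T, then fuse.

    T: 4x4 matrix sending moving-volume coordinates to reference coordinates.
    scipy's affine_transform pulls each output voxel from the input volume,
    so it needs the inverse mapping (reference -> moving).
    """
    T_inv = np.linalg.inv(T)
    warped = affine_transform(moving, T_inv[:3, :3], offset=T_inv[:3, 3],
                              output_shape=reference.shape, order=1)
    fused = reference.astype(np.float64)
    overlap = warped > 0                     # voxels covered by the warped volume
    fused[overlap] = 0.5 * (fused[overlap] + warped[overlap])
    return fused
```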
7. The method according to claim 1, wherein after the extracting of the feature points in each of the three-dimensional volume data, the method further comprises:
adding an identifier to each feature point in each of the three-dimensional volume data.
8. A data processing apparatus, comprising:
the information acquisition module is used for acquiring three-dimensional volume data of at least two directions of a target object, respectively extracting at least three characteristic points in the three-dimensional volume data, and determining the corresponding relation between the characteristic points in the three-dimensional volume data;
the transformation matrix determining module is used for determining a transformation matrix between the three-dimensional volume data of each azimuth based on the corresponding relation between the characteristic points in the three-dimensional volume data;
the data fusion module is used for fusing the three-dimensional volume data of all directions based on a transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object;
wherein the information acquisition module further comprises:
a feature point determining unit, configured to determine, for any three-dimensional volume data, at least three feature points in each three-dimensional volume data that satisfy a preset condition;
the feature point determining unit is specifically configured to:
the feature points are three-dimensional data of the optional current azimuth, the corresponding Gaussian difference image is determined according to the image corresponding to the three-dimensional data of the azimuth, and then the position of the extreme point in the Gaussian difference image is determined.
9. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the data processing method according to any one of claims 1-7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the data processing method according to any one of claims 1-7.
CN202011541156.2A 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium Active CN112598808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011541156.2A CN112598808B (en) 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112598808A (en) 2021-04-02
CN112598808B (en) 2024-04-02

Family

ID=75200471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011541156.2A Active CN112598808B (en) 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112598808B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1317146A2 (en) * 2001-12-03 2003-06-04 Monolith Co., Ltd. Image matching method and apparatus
JP2009020761A (en) * 2007-07-12 2009-01-29 Toshiba Corp Image processing apparatus and method thereof
EP2575106A1 (en) * 2011-09-30 2013-04-03 BrainLAB AG Method and device for displaying changes in medical image data
JP2014150855A (en) * 2013-02-06 2014-08-25 Mitsubishi Electric Corp Breast diagnosis assist system and breast data processing method
EP3276575A1 (en) * 2016-07-25 2018-01-31 Nuctech Company Limited Method, apparatus and system for reconstructing images of 3d surface
CN108573532A (en) * 2018-04-16 2018-09-25 北京市神经外科研究所 A kind of methods of exhibiting and device, computer storage media of mixed model
CN109087382A (en) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 A kind of three-dimensional reconstruction method and 3-D imaging system
CN111325779A (en) * 2020-02-07 2020-06-23 贝壳技术有限公司 Point cloud registration method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8090429B2 (en) * 2004-06-30 2012-01-03 Siemens Medical Solutions Usa, Inc. Systems and methods for localized image registration and fusion


Also Published As

Publication number Publication date
CN112598808A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
JP6745861B2 (en) Automatic segmentation of triplane images for real-time ultrasound imaging
CN107123137B (en) Medical image processing method and equipment
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
CN109754396B (en) Image registration method and device, computer equipment and storage medium
CN106340015B (en) A kind of localization method and device of key point
US10607420B2 (en) Methods of using an imaging apparatus in augmented reality, in medical imaging and nonmedical imaging
CN107133946B (en) Medical image processing method, device and equipment
CN105074728B (en) Chest fluoroscopic image and corresponding rib cage and vertebra 3-dimensional image Registration of Measuring Data
CN112509119B (en) Spatial data processing and positioning method and device for temporal bone and electronic equipment
US20100067768A1 (en) Method and System for Physiological Image Registration and Fusion
CN110087550B (en) Ultrasonic image display method, equipment and storage medium
CN104160424A (en) Intelligent landmark selection to improve registration accuracy in multimodal image fusion
US11954860B2 (en) Image matching method and device, and storage medium
CN111281430B (en) Ultrasonic imaging method, device and readable storage medium
KR102450931B1 (en) Image registration method and associated model training method, apparatus, apparatus
CN109740659B (en) Image matching method and device, electronic equipment and storage medium
CN107507212B (en) Digital brain visualization method and device, computing equipment and storage medium
CN106264537B (en) System and method for measuring human body posture height in image
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
WO2017038300A1 (en) Ultrasonic imaging device, and image processing device and method
CN110752029B (en) Method and device for positioning focus
CN106600619B (en) Data processing method and device
JP6689666B2 (en) Ultrasonic imaging device
CN109934798A (en) Internal object information labeling method and device, electronic equipment, storage medium
CN112598808B (en) Data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant