CN112598808A - Data processing method and device, electronic equipment and storage medium

Info

Publication number: CN112598808A; granted as CN112598808B
Application number: CN202011541156.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 高毅, 陈晓辉, 高喜璨, 杨珊灵, 巨艳, 宋宏萍
Applicant and current assignee: Shenzhen University
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0825: Detecting organic movements or changes for diagnosis of the breast, e.g. mammography
    • A61B 8/48: Diagnostic techniques
    • A61B 8/483: Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5207: Devices involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/41: Medical

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An embodiment of the invention discloses a data processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring three-dimensional volume data of a target object in at least two orientations, extracting at least three feature points in each set of three-dimensional volume data, and determining the correspondence between the feature points in the respective volume data; determining a transformation matrix between the volume data of the orientations based on this correspondence; and fusing the volume data of the orientations based on the transformation matrices to obtain fused three-dimensional volume data of the target object, thereby obtaining complete and accurate global volume data of the target object.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Three-dimensional ultrasound imaging systems have received wide attention from researchers and medical workers because of their intuitive visual effect and the clinical value they bring. Automated whole-breast ultrasound imaging in particular provides intuitive, standardized three-dimensional volume data of the breast.
The probe of existing automated whole-breast ultrasound equipment is about 15 cm wide. Before scanning starts, the probe is placed transversely, spanning 15 cm in the left-right direction; during scanning it is translated upward to sweep the entire breast area.
With this scanning pattern, even when one breast is less than 15 cm wide, the probe contacts the skin surface less well at its two lateral regions than at its middle region, so complete and accurate breast volume data cannot be obtained.
Disclosure of Invention
An embodiment of the invention provides a data processing method and apparatus, an electronic device, and a storage medium, so as to obtain complete and accurate global volume data of a target object.
In a first aspect, an embodiment of the present invention provides a data processing method, where the method includes:
acquiring three-dimensional volume data of at least two directions of a target object, respectively extracting at least three characteristic points in each three-dimensional volume data, and determining the corresponding relation between the characteristic points in each three-dimensional volume data;
determining a transformation matrix between the three-dimensional volume data of each direction based on the corresponding relation between the characteristic points in the three-dimensional volume data;
and fusing the three-dimensional volume data of each direction based on the transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
In a second aspect, an embodiment of the present invention further provides a data processing apparatus, where the apparatus includes:
the information acquisition module is used for acquiring three-dimensional volume data of at least two directions of a target object, respectively extracting at least three characteristic points in each three-dimensional volume data, and determining the corresponding relation between the characteristic points in each three-dimensional volume data;
the transformation matrix determining module is used for determining a transformation matrix between the three-dimensional volume data of each direction based on the corresponding relation between the characteristic points in the three-dimensional volume data;
and the data fusion module is used for fusing the three-dimensional volume data of each direction based on the transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the data processing method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the data processing method according to any one of the embodiments of the present invention.
According to the technical scheme of the embodiment of the invention, feature points are extracted from the acquired local three-dimensional volume data of each orientation of the target object, and the correspondence between the feature points in the respective volume data is determined. A transformation matrix between the volume data is then determined from this correspondence, and the local volume data of each orientation are fused according to the transformation matrices to obtain the global three-dimensional volume data of the target object. The global volume data of the whole target object and its surrounding area can thus be obtained directly, better revealing the tissue structure of the whole target region and its periphery.
Drawings
FIG. 1 is a flow chart of a data processing method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the acquisition of three-dimensional volume data of different orientations of a target object according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of acquiring images of different orientations of a target object according to a first embodiment of the present invention;
FIG. 4 is a schematic overlay of three-dimensional volume data for each orientation according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of an overlapping region of three-dimensional volume data of two orientations according to a first embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the effect of fusing three-dimensional volume data according to a first embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating determination of the correspondence between feature points in each three-dimensional volume data according to a second embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a data processing apparatus according to a third embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention. This embodiment is applicable to fusing the volume data of a target object acquired in different orientations to obtain global volume data of the target object. The method may be executed by a data processing apparatus, which may be implemented in software and/or hardware and may be configured on an electronic computing device. The method specifically includes the following steps:
s110, three-dimensional volume data of at least two directions of the target object are obtained, at least three feature points in each three-dimensional volume data are respectively extracted, and the corresponding relation between the feature points in each three-dimensional volume data is determined.
For example, the target object is an object whose volume data from each orientation are to be fused to obtain global volume data. It may be a human or an animal, or a particular tissue, organ, etc. of a human or an animal.
The three-dimensional volume data of the at least two orientations may be three-dimensional volume data of the target object acquired from different orientations.
Specifically, refer to fig. 2, a schematic diagram of acquiring three-dimensional volume data of a target object in different orientations. In fig. 2 the target object is a human breast: scanning over the middle area of the breast gives the AP position, scanning offset toward the midline gives the MED (medial) position, and scanning offset toward the outer side gives the LAT (lateral) position. As can be seen from fig. 2, the three scan volumes AP, LAT and MED effectively cover the entire breast area, so three-dimensional volume data of the breast in different orientations can be acquired.
It should be noted that the three-dimensional volume data of the different orientations must, when combined, form global three-dimensional volume data of the whole target object; that is, the three-dimensional volume data of the three orientations AP, LAT and MED described above effectively cover the whole breast area.
If the volume data of the three orientations AP, LAT and MED cannot effectively cover the entire breast area, additional scans can be performed for regions such as the lateral upper and lower regions of the breast area, to ensure coverage of the entire breast area.
Fig. 3 is a schematic diagram of acquiring images of different orientations of a target object, where in fig. 3, the upper left corner is an image of a cross section, the lower right corner is an image of a coronal plane, the lower left corner is an image of a sagittal plane, and the upper right corner is a volume rendering map of three-dimensional volume data.
After the three-dimensional volume data of the target object in different directions are acquired, at least three feature points of the three-dimensional volume data of each direction are respectively extracted.
The feature points may be volume data points that connect three-dimensional volume data of different orientations.
Referring to the overlapping schematic diagram of the three-dimensional volume data in each orientation described in fig. 4, after the three-dimensional volume data in the three orientations of AP, LAT, and MED are acquired from fig. 2, the three-dimensional volume data in the three orientations of AP, LAT, and MED overlap, and the volume data points in the overlapping region may be feature points.
The reason why at least three feature points are acquired from the three-dimensional volume data of each orientation is to prepare for the subsequent calculation of the transformation matrices: the transformation matrix between the volume data of two orientations can be calculated only when at least three corresponding feature points exist in each orientation.
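A brief degrees-of-freedom count makes this threshold concrete (this note is an annotation added here, not part of the patent text):

```latex
% Each 3D point correspondence T(p_i) = q_i supplies three scalar
% equations. A rigid transform x \mapsto R\,x + c has 6 unknowns
% (3 for rotation, 3 for translation), so k correspondences require
\[
  3k \ge 6 \quad\Rightarrow\quad k \ge 2,
\]
% but two points still leave the rotation about the line through them
% undetermined; k = 3 non-collinear points is the minimum that fixes
% the transform. (A full affine transform has 12 unknowns and needs
% k \ge 4 non-coplanar points.)
```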
In the embodiment of the present invention, the manner of acquiring the feature points in each three-dimensional volume data may be matching and acquiring according to feature information of preset feature points, or acquiring by using a neural network model or an algorithm, and the like. The specific feature point obtaining manner is described in detail in the following embodiments.
After each feature point in each three-dimensional volume data is acquired, each feature point in each three-dimensional volume data may be formed into a feature point set. For example, three-dimensional volume data of three directions of AP, LAT, and MED are available, and 50 feature points are acquired from the three-dimensional volume data of the AP direction, and then the 50 feature points form a feature point set of the AP direction; acquiring 60 characteristic points from the three-dimensional volume data of the LAT azimuth, and forming a characteristic point set of the LAT azimuth by using the 60 characteristic points; and acquiring 80 characteristic points from the three-dimensional volume data of the MED azimuth, and forming the 80 characteristic points into a characteristic point set of the MED azimuth.
After the feature point sets of the respective orientations are formed, the correspondence between the feature points in the sets can be determined. Specifically, it is determined, for example, which feature point in the feature point set of the LAT orientation a given feature point in the feature point set of the AP orientation corresponds to; in this way the correspondence between the feature points of the orientations is found.
Optionally, after the extracting the feature points in each three-dimensional volume data respectively, the method may further include: an identifier is added to each feature point in each three-dimensional volume data.
For example, the identifier may be a unique identifier added to each feature point in each three-dimensional volume data, for example, a unique number may be attached to each feature point in each three-dimensional volume data.
Specifically, suppose there are three-dimensional volume data of two orientations A and B, with 5 feature points in A and 6 feature points in B. The 5 feature points in A are numbered 1, 2, 3, 4, 5, and the 6 feature points in B are numbered 1′, 2′, 3′, 4′, 5′, 6′. The feature points can thus be distinguished, avoiding confusion when the correspondence between the feature points in A and those in B is subsequently determined.
Thus, when an identifier has been added to each feature point in the three-dimensional volume data of each orientation, the feature points of the different orientations are not confused with one another while the correspondences between them are determined.
And S120, determining a transformation matrix between the three-dimensional volume data of each direction based on the corresponding relation between the characteristic points in the three-dimensional volume data.
For example, after the correspondence relationship between the feature points in the three-dimensional volume data of each orientation is determined, the transformation matrix between the three-dimensional volume data of each orientation may be determined according to the correspondence relationship between the feature points in the three-dimensional volume data of each orientation.
In the embodiment of the invention, after the corresponding relation among the characteristic points in the three-dimensional volume data of each direction is determined, the characteristic points corresponding to one another are recorded.
Taking the three-dimensional volume data of any two orientations as an example, let the volume data of the first orientation be V1 and the volume data of the second orientation be V2. Denote the set of feature points in V1 that have correspondences in V2 by $\{p_i^{V_1}\}$, $i = 1, 2, 3, \ldots, n$, and the set of feature points in V2 that have correspondences in V1 by $\{p_j^{V_2}\}$, $j = 1, 2, 3, \ldots, n$, where the point corresponding to $p_i^{V_1}$ is $p_i^{V_2}$.
In the embodiment of the present invention, as can be seen from fig. 4, the breast is soft tissue, so when three-dimensional volume data are acquired in different orientations the breast deforms differently under the probe pressure, and the resulting images differ accordingly. That is, from the physical process of imaging, the difference between the volume data of different orientations is not a rigid deformation.
Denote the spatial transformation between the three-dimensional volume data of any two orientations by a parameter t, and let T be the optimal spatial transformation; T can be obtained by the following optimization process.
Optionally, determining the transformation matrix between the three-dimensional volume data of each orientation based on the correspondence between the feature points may specifically be: for the three-dimensional volume data of any two orientations of the target object, take the volume data of one orientation as the first three-dimensional volume data and the volume data of the other orientation as the second three-dimensional volume data, and determine the transformation matrix between them, based on the correspondence between their feature points, according to the following formula:
\[
T = \operatorname*{arg\,min}_{t} \sum_{i=1}^{n} \left\| t\left(p_i^{V_1}\right) - p_i^{V_2} \right\|^2
\]

wherein $p_i^{V_1}$ are the feature points in the first three-dimensional volume data of the target object that have correspondences with feature points in the second three-dimensional volume data, $p_i^{V_2}$ are the corresponding feature points in the second three-dimensional volume data, $i = 1, 2, 3, \ldots, n$; and the transformation matrix $T$ is the value of the parameter $t$ that minimizes the sum.
In the embodiment of the invention, T can be computed by solving the above formula with least squares, gradient descent, or Gauss-Newton (or quasi-Newton or Newton) methods, which yields the optimal transformation matrix T.
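As a concrete illustration only, the sketch below solves the objective above in closed form under the additional assumption that t is restricted to a rigid transform (the Kabsch solution); the patent itself leaves the transform class and solver open, and the function name and use of numpy are assumptions of this sketch rather than the patent's method.

```python
import numpy as np

def fit_rigid_transform(src_pts, dst_pts):
    """Closed-form least-squares rigid fit (Kabsch): returns the 4x4
    homogeneous T minimizing sum_i || R @ src_i + c - dst_i ||^2."""
    src = np.asarray(src_pts, dtype=float)   # (n, 3), n >= 3 non-collinear
    dst = np.asarray(dst_pts, dtype=float)   # (n, 3), dst_i corresponds to src_i
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    c = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, c
    return T
```

Since the description notes that the deformation between orientations is not rigid, an affine or non-rigid model fitted iteratively (e.g., by Gauss-Newton) may match the intended use better; the rigid fit is simply the most compact least-squares instance of the formula.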
And S130, fusing the three-dimensional volume data of each direction based on the transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
For example, the fused three-dimensional volume data may be global three-dimensional volume data of the target object formed by fusing three-dimensional volume data of each orientation.
After the correspondence between the feature points in each three-dimensional volume data is obtained (i.e., after registration), a transformation matrix between each three-dimensional volume data can be determined, and after the transformation matrix between each three-dimensional volume data is determined, the three-dimensional volume data in each orientation can be fused to obtain fused three-dimensional volume data of the target object.
In this way, the global three-dimensional volume data of the target object are obtained from the local volume data of each orientation, so the global volume data of the whole target object and its surrounding area can be obtained directly, better revealing the tissue structure of the whole target region and its periphery. When the target object is a subject of medical examination, the panoramic image of the target object helps the doctor better understand its spatial relationships and the extent of a lesion.
Optionally, the fusing the three-dimensional volume data of each direction to obtain fused three-dimensional volume data of the target object may be: for the three-dimensional data of any two directions of the target object, the following steps are executed to fuse the three-dimensional data of the two directions: taking the three-dimensional volume data of one position as reference three-dimensional volume data, and taking the three-dimensional volume data of the other position as three-dimensional volume data to be registered; and mapping the coordinates of the three-dimensional data to be registered to a coordinate system of the reference three-dimensional data based on a transformation matrix between the three-dimensional data of the two directions, and fusing the three-dimensional data of the two directions.
For the three-dimensional volume data of any two orientations, the volume data of one orientation can be taken as the reference volume data and the volume data of the other orientation as the volume data to be registered. Based on the transformation matrix between the two, the coordinates of the volume data to be registered are mapped into the coordinate system of the reference volume data, so the two volumes can be fused, and the result is the fused three-dimensional volume data.
In the embodiment of the present invention, for the three-dimensional volume data of two or more orientations, for example, for the three-dimensional volume data of three orientations, the three-dimensional volume data of two orientations are fused first according to the above method, and then the three-dimensional volume data of the third orientation is fused with the three-dimensional volume data obtained by fusing the three-dimensional volume data of the previous two orientations, so as to obtain the fusion result of the three-dimensional volume data of three orientations.
Referring to fig. 5, which shows the overlapping region of the volume data of two orientations: when the volume data in fig. 4 overlap, the images corresponding to the two volumes overlap on the surface (for example, the areas boxed in the upper-left, lower-left and lower-right images of fig. 5 are the overlapping regions of the two volumes). The voxel value in the overlapping region is half of the sum of the fixed-image voxel value and the moving-image voxel value. That is, of the two images, one is taken as the fixed image and the other as the moving image; when the moving image is placed onto the fixed image (specifically, when the coordinates of its volume data are mapped into the coordinate system of the fixed image), each voxel in the overlapping region is assigned half of the sum of the fixed-image and moving-image voxel values.
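The following is a minimal numpy/SciPy sketch of this place-and-average step, assuming T is a 4x4 homogeneous matrix mapping voxel coordinates of the moving volume into the fixed volume's grid; the interpolation order, the handling of voxel spacing, and the function names are assumptions of the sketch, not details given by the patent.

```python
import numpy as np
from scipy.ndimage import affine_transform

def fuse_volumes(fixed, moving, T):
    """Resample `moving` into the coordinate frame of `fixed` via the 4x4
    homogeneous transform T (moving -> fixed), then average the overlap."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    # affine_transform pulls each output voxel from input coordinates,
    # so it needs the inverse mapping (fixed -> moving).
    T_inv = np.linalg.inv(T)
    warped = affine_transform(moving, T_inv[:3, :3], offset=T_inv[:3, 3],
                              output_shape=fixed.shape, order=1,
                              mode="constant", cval=np.nan)
    # Overlapping region: half of the sum of the fixed-image and
    # moving-image voxel values, as described above; elsewhere keep the
    # fixed image. (A real implementation would enlarge the output grid
    # so that moving-only regions are preserved as well.)
    return np.where(np.isnan(warped), fixed, 0.5 * (fixed + warped))
```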
Referring to fig. 6, which shows the effect of the fused three-dimensional volume data: after the volume data of all orientations of the target object are fused, the global volume data of the target object are obtained. As can be seen from the lower-left image of fig. 6, the complete rib spans the two volumes (M and N in that image are the rib segments in the volume data of the two orientations). Through the registration and fusion of the embodiment of the invention, the ribs in the two volumes are accurately matched together, and the panoramic volume data correctly contain the whole extent of the rib.
According to the technical scheme of the embodiment of the invention, feature points are extracted from the acquired local three-dimensional volume data of each orientation of the target object, and the correspondence between the feature points in the respective volume data is determined. A transformation matrix between the volume data is then determined from this correspondence, and the local volume data of each orientation are fused according to the transformation matrices to obtain the global three-dimensional volume data of the target object. The global volume data of the whole target object and its surrounding area can thus be obtained directly, better revealing the tissue structure of the whole target region and its periphery.
Example two
Embodiments of the present invention may be combined with various alternatives of the above embodiments. In the embodiment of the present invention, the extraction of feature points in each three-dimensional volume data and the correspondence between feature points in each three-dimensional volume data are specifically described.
At least three feature points in each three-dimensional volume data are extracted, and the corresponding relationship between the feature points in each three-dimensional volume data is determined, specifically, the following two ways can be adopted:
(1) For any three-dimensional volume data, determine at least three feature points satisfying a preset condition in each set of volume data; then determine the correspondence between the feature points in the volume data based on the one-way (injective) mapping relationship between the feature points in any two sets of volume data.
For example, for three-dimensional volume data of any orientation, at least three volume data points in the three-dimensional volume data that satisfy a preset condition may be extracted as feature points.
The preset condition is set in advance. For example, for the volume data of a chosen orientation, a difference-of-Gaussians image can be computed from the image corresponding to that volume, and the positions of the extreme points of the difference-of-Gaussians image taken as the feature points; the preset condition is then that a point is an extreme point of the difference image.
The feature points in the three-dimensional volume data of each orientation can be extracted according to the above-described method.
It should be noted that the preset condition is only one possible condition, and there may be other possible conditions, which are not listed here, but it should be clear to those skilled in the art that any preset condition that can extract the feature point belongs to the protection scope of the embodiment of the present invention.
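As one possible realization of the difference-extreme-point condition described above (a sketch under assumed parameter values, not code from the patent), a three-dimensional difference-of-Gaussians detector could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_feature_points(volume, sigma1=1.0, sigma2=2.0, n_points=50):
    """Illustrative DoG extremum detector for a 3D volume: smooth at two
    scales, subtract, and keep the strongest local extrema as feature points."""
    vol = np.asarray(volume, dtype=float)
    dog = gaussian_filter(vol, sigma1) - gaussian_filter(vol, sigma2)
    mag = np.abs(dog)                       # treat maxima and minima alike
    # A voxel is a local extremum if it attains the largest |DoG| response
    # within its 3x3x3 neighbourhood.
    is_extremum = mag == maximum_filter(mag, size=3)
    coords = np.argwhere(is_extremum)
    order = np.argsort(-mag[is_extremum])   # strongest responses first
    return coords[order[:n_points]]         # (n_points, 3) voxel indices
```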
After the feature points are extracted as above, they do not yet correspond to each other one to one, so the correspondence between the feature points in the volume data still needs to be determined. Specifically, this can be done based on the one-way mapping relationship between the feature points in any two sets of volume data.
Optionally, determining the correspondence between the feature points based on the one-way mapping relationship between the feature points in any two sets of three-dimensional volume data may specifically be: for the volume data of any two orientations, take the volume data of one orientation as the first three-dimensional volume data and the volume data of the other orientation as the second three-dimensional volume data, and perform the following steps to determine the correspondence between the feature points: determine a first one-way mapping from each feature point in the first volume data to the feature points in the second volume data; determine a second one-way mapping from each feature point in the second volume data to the feature points in the first volume data; if the first and second one-way mappings are mutually consistent, i.e., together they form a bijection, the two feature points satisfying this relationship are feature points having a correspondence.
For example, for any two orientations of three-dimensional volume data, the three-dimensional volume data of one orientation is taken as the first three-dimensional volume data, and the three-dimensional volume data of the other orientation is taken as the second three-dimensional volume data.
The first one-way mapping maps each feature point in the first three-dimensional volume data to a feature point in the second three-dimensional volume data.
The second one-way mapping maps each feature point in the second three-dimensional volume data to a feature point in the first three-dimensional volume data.
Fig. 7 is a schematic diagram for determining the correspondence between the feature points in the three-dimensional volume data, where A is the first volume data, B is the second volume data, 1, 2, 3, ..., n in A are the feature points in the first volume data, and 1′, 2′, 3′, ..., n′ in B are the feature points in the second volume data.
A first one-way mapping from each feature point in A to the feature points in B is determined. Specifically, as shown in fig. 7, the mapping of 1 in A onto 1′, 2′, 3′, ..., n′ in B is determined, then the mapping of 2 in A onto 1′, 2′, 3′, ..., n′ in B, and so on, until the mapping of every feature point in A to the feature points in B is determined.
Correspondingly, a second one-way mapping from each feature point in B to the feature points in A is determined. Specifically, as shown in fig. 7, the mapping of 1′ in B onto 1, 2, 3, ..., n in A is determined, then the mapping of 2′ in B onto 1, 2, 3, ..., n in A, and so on, until the mapping of every feature point in B to the feature points in A is determined.
If the first and second one-way mappings are mutually consistent for a pair of points, the two feature points are determined to be feature points having a correspondence.
Specifically, as shown in fig. 7, if the first mapping takes 1 in A to 1′ in B and the second mapping takes 1′ in B back to 1 in A, then 1 in A and 1′ in B correspond. If the first mapping takes 2 in A to 2′ in B but the second mapping does not take 2′ in B back to 2 in A, then 2 in A and 2′ in B do not correspond. That is, each matched feature point in A has a unique one-to-one correspondence with a feature point in B.
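The patent does not specify how the one-way mapping values are computed. One common concrete realization of this mutual-consistency test is mutual nearest-neighbour matching on per-point descriptors; the sketch below assumes each feature point already carries a descriptor vector (the descriptor arrays and the function name are illustrative assumptions):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Return pairs (i, j) such that j is i's nearest descriptor in B and
    i is j's nearest in A: the forward and backward mappings agree, giving
    a one-to-one correspondence on the matched subset."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    fwd = d.argmin(axis=1)   # best match in B for each feature point of A
    bwd = d.argmin(axis=0)   # best match in A for each feature point of B
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```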
(2) For any two sets of three-dimensional volume data: first, perform feature matching in each set of volume data based on at least one piece of preset feature information, determine at least one feature point corresponding to that feature information in each set, and set the correspondence between the feature points corresponding to the same feature information in different sets; second, determine at least two further feature points satisfying a preset condition in each set, and determine the correspondence between these feature points based on the one-way mapping relationship between the feature points in any two sets, as in (1).
Illustratively, the feature information is information about a feature that is set in advance.
Specifically, take the case of three feature points per set of volume data, with the target object in fig. 2 being a breast: three-dimensional volume data of the three orientations AP, LAT and MED are acquired. Since the target object is known to be a breast, the area common to the volume data of all three orientations is the nipple area; the preset feature information here may therefore be the nipple.
Since the nipple area is distinct, its position is also distinct, and one feature point can be determined in the volume data of each orientation by matching the volume data of that orientation against the volume data points of the nipple area (the feature information).
Because the nipple area is distinct, determining this feature point in the volume data of each orientation also determines the correspondence between these feature points; that is, the correspondence between the feature points corresponding to the same feature information in different sets of volume data can be set directly.
Specifically, the doctor marks the nipple position as a landmark in the three-dimensional volume data of each of the AP, LAT and MED orientations, so the nipple positions in the volume data of two adjacent orientations can be used as a pair of corresponding feature points.
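The landmark here is placed manually by the doctor. If the marked nipple patch instead had to be located in another orientation's volume automatically, a simple cross-correlation template match is one option; this is an added illustration under that assumption, not a step the patent prescribes:

```python
import numpy as np
from scipy.signal import fftconvolve

def find_landmark(volume, template):
    """Crude 3D template matcher: zero-mean cross-correlation computed as
    an FFT convolution with the flipped template; returns the voxel index
    of the best-scoring position."""
    v = np.asarray(volume, dtype=float)
    t = np.asarray(template, dtype=float)
    t = t - t.mean()                        # ignore constant intensity offset
    score = fftconvolve(v, t[::-1, ::-1, ::-1], mode="same")
    return np.unravel_index(np.argmax(score), score.shape)
```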
After one feature point is acquired based on the feature information, another two feature points can be acquired based on the preset condition by using the above method (1), and the corresponding relationship between the two feature points can be determined based on the above method (1).
Thus, the three feature points in each three-dimensional volume data and the corresponding relationship between the three feature points in each three-dimensional volume data can be found.
That is, one of the feature points may be determined from at least one piece of preset feature information, for example the nipple area information; once this feature point is matched, its correspondence is determined at the same time. Then, two feature points satisfying a preset condition (for example, being extreme points of the difference image) are determined as the remaining two feature points using method (1) above, and the correspondence between these two feature points across the volume data is likewise determined by method (1). In this way, at least three feature points in the volume data of each orientation, and the correspondences between them, can be determined.
In this way, the feature points in the three-dimensional volume data of each direction can be determined through the two methods, and the corresponding relationship between the feature points in the three-dimensional volume data is determined, so that the transformation matrix between the three-dimensional volume data of each direction can be determined based on the corresponding relationship between the feature points in the three-dimensional volume data, so that the three-dimensional volume data of each direction can be fused to obtain the global three-dimensional volume data of the target object.
It should be noted that, of the above two methods for determining the feature points in the volume data of each orientation and the correspondences between them, a user can select either one, or both, according to need; this is not limited here.
It should be noted that, when determining at least three feature points in each three-dimensional volume data and the corresponding relationship between at least three feature points in each three-dimensional volume data, if the two manners are selected, the results of the two manners can be mutually verified, thereby improving the accuracy of determining the feature points and the accuracy of the corresponding relationship between the feature points.
According to the technical scheme of the embodiment of the invention, through the method for determining the characteristic points in the three-dimensional volume data of all directions and determining the corresponding relation among the characteristic points in the three-dimensional volume data, the transformation matrix among the three-dimensional volume data of all directions is determined based on the corresponding relation among the characteristic points in the three-dimensional volume data, so that the three-dimensional volume data of all directions are fused to obtain the global three-dimensional volume data of the target object.
EXAMPLE III
Fig. 8 is a schematic structural diagram of a data processing apparatus according to a third embodiment of the present invention, as shown in fig. 8, the apparatus includes: an information acquisition module 31, a transformation matrix determination module 32, and a data fusion module 33.
The information acquiring module 31 is configured to acquire three-dimensional volume data of at least two orientations of a target object, extract at least three feature points in each three-dimensional volume data, and determine a corresponding relationship between the feature points in each three-dimensional volume data;
a transformation matrix determination module 32, configured to determine a transformation matrix between three-dimensional volume data in each orientation based on a correspondence between feature points in each of the three-dimensional volume data;
and a data fusion module 33, configured to fuse the three-dimensional volume data in each direction based on a transformation matrix between the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
On the basis of the technical solution of the above embodiment, the information obtaining module 31 includes:
a first corresponding relation determining unit, configured to perform feature matching in each three-dimensional volume data based on at least one preset feature information, and determine at least three feature points corresponding to the feature information in each three-dimensional volume data; setting corresponding relations among all characteristic points corresponding to the same characteristic information in different three-dimensional volume data;
the second corresponding relation determining unit is used for determining at least two feature points which satisfy a preset condition in the three-dimensional volume data, and for determining the correspondence between the feature points in the volume data based on the one-way (injective) mapping relationship between the feature points in any two sets of three-dimensional volume data.
On the basis of the technical solution of the above embodiment, the information obtaining module 31 may further include:
the third corresponding relation determining unit is used for determining, for any three-dimensional volume data, at least three feature points which satisfy a preset condition in each set of volume data, and for determining the correspondence between the feature points in any two sets of three-dimensional volume data based on the one-way mapping relationship between their feature points.
On the basis of the technical solution of the above embodiment, the second correspondence determining unit or the third correspondence determining unit includes:
a first one-way mapping determining subunit, configured to, for the three-dimensional volume data of any two orientations, take the volume data of one orientation as the first three-dimensional volume data and the volume data of the other orientation as the second three-dimensional volume data, and determine a first one-way mapping from each feature point in the first volume data to the feature points in the second volume data;
a second one-way mapping determining subunit, configured likewise to determine a second one-way mapping from each feature point in the second volume data to the feature points in the first volume data;
and a correspondence determining subunit, configured to determine, if the first and second one-way mappings are mutually consistent (together form a bijection), that the two feature points satisfying this relationship are feature points having a correspondence.
On the basis of the technical solution of the foregoing embodiment, the transformation matrix determining module 32 is specifically configured to:
regarding the three-dimensional volume data of any two directions of the target object, taking the three-dimensional volume data of one direction as first three-dimensional volume data, taking the three-dimensional volume data of the other direction as second three-dimensional volume data, and determining a transformation matrix between the three-dimensional volume data of the two directions according to the following formula based on the corresponding relation between the characteristic points in the three-dimensional volume data of the two directions:
\[
T = \operatorname*{arg\,min}_{t} \sum_{i=1}^{n} \left\| t\left(p_i^{V_1}\right) - p_i^{V_2} \right\|^2
\]

wherein $p_i^{V_1}$ are the feature points in the first three-dimensional volume data of the target object that have correspondences with feature points in the second three-dimensional volume data, $p_i^{V_2}$ are the corresponding feature points in the second three-dimensional volume data, $i = 1, 2, 3, \ldots, n$; and the transformation matrix $T$ is the value of the parameter $t$ that minimizes the sum.
On the basis of the technical solution of the above embodiment, the data fusion module 33 is specifically configured to:
regarding the three-dimensional volume data of any two directions of the target object, taking the three-dimensional volume data of one direction as reference three-dimensional volume data, and taking the three-dimensional volume data of the other direction as three-dimensional volume data to be registered; and mapping the coordinates of the three-dimensional data to be registered to a coordinate system of the reference three-dimensional data based on a transformation matrix between the three-dimensional data of the two directions, and fusing the three-dimensional data of the two directions.
On the basis of the technical solution of the above embodiment, the information obtaining module 31 may further include:
and an identifier adding unit configured to add an identifier to each feature point in each of the three-dimensional volume data.
The data processing device provided by the embodiment of the invention can execute the data processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 9 is a schematic structural diagram of an electronic apparatus according to a fourth embodiment of the present invention, as shown in fig. 9, the electronic apparatus includes a processor 70, a memory 71, an input device 72, and an output device 73; the number of the processors 70 in the electronic device may be one or more, and one processor 70 is taken as an example in fig. 9; the processor 70, the memory 71, the input device 72 and the output device 73 in the electronic apparatus may be connected by a bus or other means, and the bus connection is exemplified in fig. 9.
The memory 71, as a computer-readable storage medium, may be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules (e.g., the information acquisition module 31, the transformation matrix determination module 32, and the data fusion module 33) corresponding to the data processing method in the embodiment of the present invention. The processor 70 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 71, that is, implements the data processing method described above.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 71 may further include memory located remotely from the processor 70, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic apparatus. The output device 73 may include a display device such as a display screen.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a data processing method.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the data processing method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes instructions for enabling a computer electronic device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the data processing apparatus, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A data processing method, comprising:
acquiring three-dimensional volume data of at least two directions of a target object, respectively extracting at least three characteristic points in each three-dimensional volume data, and determining the corresponding relation between the characteristic points in each three-dimensional volume data;
determining a transformation matrix between the three-dimensional volume data of each direction based on the corresponding relation between the characteristic points in the three-dimensional volume data;
and fusing the three-dimensional volume data of each direction based on the transformation matrix among the three-dimensional volume data to obtain fused three-dimensional volume data of the target object.
2. The method according to claim 1, wherein the extracting at least three feature points in each of the three-dimensional volume data and determining a correspondence between the feature points in each of the three-dimensional volume data respectively comprises:
performing feature matching in each three-dimensional volume data based on at least one preset feature information, and determining at least one feature point corresponding to the feature information in each three-dimensional volume data; setting corresponding relations among all characteristic points corresponding to the same characteristic information in different three-dimensional volume data;
determining at least two feature points which satisfy a preset condition in each three-dimensional volume data; and determining the correspondence between the feature points in the three-dimensional volume data based on the one-way (injective) mapping relationship between the feature points in any two three-dimensional volume data.
3. The method according to claim 1, wherein the extracting at least three feature points in each of the three-dimensional volume data respectively and the determining the correspondence between the feature points in each of the three-dimensional volume data comprises:
for any three-dimensional volume data, determining at least three characteristic points which meet preset conditions in each three-dimensional volume data;
and determining the correspondence between the feature points in the three-dimensional volume data based on the one-way (injective) mapping relationship between the feature points in any two three-dimensional volume data.
4. The method according to claim 2 or 3, wherein the determining the correspondence between the feature points in any two three-dimensional volume data based on the one-way mapping relationship between the feature points comprises:
for any two directions of three-dimensional volume data, taking the three-dimensional volume data of one direction as first three-dimensional volume data and taking the three-dimensional volume data of the other direction as second three-dimensional volume data, and executing the following steps to determine the corresponding relation between the characteristic points in the three-dimensional volume data:
determining a first single mapping value from each characteristic point in the first three-dimensional volume data to each characteristic point in the second three-dimensional volume data;
determining a second single mapping value from each feature point in the second three-dimensional volume data to each feature point in the first three-dimensional volume data;
and if the first single mapping value and the second single mapping value meet the full-scale relation, two feature points meeting the full-scale relation are feature points with a corresponding relation.
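A minimal sketch of the bidirectional check in claim 4, read here as mutual-nearest-neighbour matching: the first one-way mapping value is taken to be the distance-minimizing match from the first volume's feature points to the second's, and vice versa, and only pairs on which the two mappings agree are kept. The Euclidean metric and all names are assumptions; the claim does not specify how the one-way mapping values are computed.

import numpy as np

def mutual_matches(feats_a, feats_b):
    """Return index pairs (i, j) where a_i -> b_j and b_j -> a_i agree.

    feats_a -- (n, d) array of feature vectors from the first volume data
    feats_b -- (m, d) array of feature vectors from the second volume data
    """
    # Pairwise Euclidean distances between the two feature sets.
    dists = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    a_to_b = dists.argmin(axis=1)  # first one-way mapping
    b_to_a = dists.argmin(axis=0)  # second one-way mapping
    # Keep only the pairs on which both one-way mappings agree.
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]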
5. The method according to claim 1, wherein determining a transformation matrix between the three-dimensional volume data of the respective orientations based on the correspondence between the feature points comprises:
for the three-dimensional volume data of any two orientations of the target object, taking the three-dimensional volume data of one orientation as first three-dimensional volume data and the three-dimensional volume data of the other orientation as second three-dimensional volume data, and determining the transformation matrix between them, based on the correspondence between their feature points, according to:

T = argmin_T Σ_{i=1}^{n} ‖ T·p_i − q_i ‖²

wherein p_i (i = 1, 2, 3, …, n) are the feature points in the first three-dimensional volume data that have a correspondence with feature points in the second three-dimensional volume data, q_i (i = 1, 2, 3, …, n) are the corresponding feature points in the second three-dimensional volume data, and T is the transformation matrix, taken as the value that minimizes the sum above.
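The minimization in claim 5 has a closed-form solution when T is an affine matrix in homogeneous coordinates; the sketch below solves it with numpy least squares. Whether T is affine or constrained to be rigid is an assumption here, since the claim only asks for the T that minimizes the summed squared residuals; a rigid fit would instead use an SVD-based Procrustes (Kabsch) solution.

import numpy as np

def estimate_transform(p, q):
    """Return the 4x4 matrix T minimizing sum_i ||T @ p_i - q_i||^2.

    p, q -- (n, 3) arrays of corresponding feature points from the two
            volumes; n >= 4 is needed for a well-posed affine fit.
    """
    n = p.shape[0]
    ph = np.hstack([p, np.ones((n, 1))])  # homogeneous coordinates, (n, 4)
    qh = np.hstack([q, np.ones((n, 1))])
    # Row-wise, ph @ T.T ~= qh, so least squares on ph yields T transposed.
    T_transposed, *_ = np.linalg.lstsq(ph, qh, rcond=None)
    return T_transposed.T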
6. The method of claim 1, wherein fusing the three-dimensional volume data of each orientation based on the transformation matrices between the three-dimensional volume data comprises:
for the three-dimensional volume data of any two orientations of the target object, performing the following steps to fuse them:
taking the three-dimensional volume data of one orientation as reference three-dimensional volume data, and taking the three-dimensional volume data of the other orientation as three-dimensional volume data to be registered; and
mapping the coordinates of the three-dimensional volume data to be registered into the coordinate system of the reference three-dimensional volume data based on the transformation matrix between the two, thereby fusing the three-dimensional volume data of the two orientations.
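A sketch of claim 6's fusion step: the to-be-registered volume is resampled into the reference volume's coordinate system with the 4x4 matrix T obtained above, and the two volumes are then blended. Mean blending of the overlap is an illustrative choice, as the claim does not fix a fusion rule; note that scipy's affine_transform maps output coordinates back to input coordinates, so it takes the inverse of T.

import numpy as np
from scipy.ndimage import affine_transform

def fuse(reference, moving, T):
    """Fuse two volumes given T mapping moving-volume coordinates to reference coordinates."""
    T_inv = np.linalg.inv(T)  # affine_transform pulls samples from the input volume
    warped = affine_transform(moving, T_inv[:3, :3], offset=T_inv[:3, 3],
                              output_shape=reference.shape, order=1)
    fused = reference.astype(float)
    # Average where the warped volume carries data (assuming nonnegative
    # intensities), keep the reference voxels elsewhere.
    mask = warped > 0
    fused[mask] = 0.5 * (fused[mask] + warped[mask])
    return fused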
7. The method according to claim 1, further comprising, after extracting the feature points from each set of three-dimensional volume data:
adding an identifier to each feature point in each set of three-dimensional volume data.
8. A data processing apparatus, comprising:
an information acquisition module configured to acquire three-dimensional volume data of a target object in at least two orientations, extract at least three feature points from each set of three-dimensional volume data, and determine the correspondence between the feature points;
a transformation matrix determination module configured to determine a transformation matrix between the three-dimensional volume data of the respective orientations based on the correspondence between the feature points; and
a data fusion module configured to fuse the three-dimensional volume data of the respective orientations based on the transformation matrices between them to obtain fused three-dimensional volume data of the target object.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data processing method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the data processing method of any one of claims 1-7.
CN202011541156.2A 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium Active CN112598808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011541156.2A CN112598808B (en) 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112598808A true CN112598808A (en) 2021-04-02
CN112598808B CN112598808B (en) 2024-04-02

Family

ID=75200471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011541156.2A Active CN112598808B (en) 2020-12-23 2020-12-23 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112598808B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1317146A2 (en) * 2001-12-03 2003-06-04 Monolith Co., Ltd. Image matching method and apparatus
US20060004275A1 (en) * 2004-06-30 2006-01-05 Vija A H Systems and methods for localized image registration and fusion
JP2009020761A (en) * 2007-07-12 2009-01-29 Toshiba Corp Image processing apparatus and method thereof
EP2575106A1 (en) * 2011-09-30 2013-04-03 BrainLAB AG Method and device for displaying changes in medical image data
JP2014150855A (en) * 2013-02-06 2014-08-25 Mitsubishi Electric Corp Breast diagnosis assist system and breast data processing method
EP3276575A1 (en) * 2016-07-25 2018-01-31 Nuctech Company Limited Method, apparatus and system for reconstructing images of 3d surface
CN108573532A (en) * 2018-04-16 2018-09-25 北京市神经外科研究所 A kind of methods of exhibiting and device, computer storage media of mixed model
CN109087382A (en) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 A kind of three-dimensional reconstruction method and 3-D imaging system
CN111325779A (en) * 2020-02-07 2020-06-23 贝壳技术有限公司 Point cloud registration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112598808B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
JP6745861B2 (en) Automatic segmentation of triplane images for real-time ultrasound imaging
CN108520519B (en) Image processing method and device and computer readable storage medium
CN107123137B (en) Medical image processing method and equipment
CN109754396B (en) Image registration method and device, computer equipment and storage medium
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
US10607420B2 (en) Methods of using an imaging apparatus in augmented reality, in medical imaging and nonmedical imaging
US11083436B2 (en) Ultrasonic image analysis systems and analysis methods thereof
CN110087550B (en) Ultrasonic image display method, equipment and storage medium
CN109064549B (en) Method for generating mark point detection model and method for detecting mark point
CN112509119B (en) Spatial data processing and positioning method and device for temporal bone and electronic equipment
CN111583188A (en) Operation navigation mark point positioning method, storage medium and computer equipment
US11954860B2 (en) Image matching method and device, and storage medium
CN107106128B (en) Ultrasound imaging apparatus and method for segmenting an anatomical target
CN110706791B (en) Medical image processing method and device
CN107111875A (en) Feedback for multi-modal autoregistration
CN105303550A (en) Image processing apparatus and image processing method
CN108805933B (en) Method for determining target point and positioning system of mammary gland X-ray photographic system
CN107835661A (en) Ultrasonoscopy processing system and method and its device, supersonic diagnostic appts
CN109934798A (en) Internal object information labeling method and device, electronic equipment, storage medium
CN112598808A (en) Data processing method and device, electronic equipment and storage medium
US20210383564A1 (en) Ultrasound image acquisition method, system and computer storage medium
JP6944492B2 (en) Image acquisition method, related equipment and readable storage medium
CN113658106A (en) Liver focus automatic diagnosis system based on abdomen enhanced CT
CN113506313A (en) Image processing method and related device, electronic equipment and storage medium
US20150213591A1 (en) Dynamic local registration system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant