WO2021238171A1 - Image registration method and related model training method, device, and apparatus - Google Patents
- Publication number
- WO2021238171A1 (application PCT/CN2020/136254)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature
- dimensional
- virtual
- projection
- Prior art date
Classifications
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33 — Image registration using feature-based methods
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T2207/20081 — Training; Learning
Definitions
- the present disclosure relates to the field of image processing technology, and in particular to an image registration method and related model training methods, equipment, and devices.
- Image registration is the process of matching two or more images acquired at different times, by different sensors (imaging equipment), or under different conditions (camera position, angle, etc.).
- Medical image registration refers to seeking a spatial transformation (or a series of transformations) for one medical image so that it becomes spatially consistent with the corresponding points on another medical image.
- Using neural networks to register images has shown great potential and wide application prospects.
- Conventionally, the samples used to train a neural network model for registration are real images that have been manually registered.
- Because such manually registered real images are costly to obtain, the application of neural network models trained on real images is subject to certain restrictions.
- the embodiments of the present disclosure provide an image registration method and related model training methods, equipment, and devices.
- the first aspect of the embodiments of the present disclosure provides a method for training an image registration model.
- the method includes: acquiring a real two-dimensional image and a reference two-dimensional image, where the real two-dimensional image is obtained by imaging a real target with an imaging device, and the position of the real target in the reference two-dimensional image matches the real two-dimensional image; using the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map, wherein the image registration model has been pre-trained with virtual images, the virtual image feature extraction network participates in the pre-training, and the virtual images are generated based on a virtual target; using the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain a first real feature map, where the real image feature extraction network does not participate in the pre-training; and using the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
- In this way, the virtual image feature extraction network is used to adjust the network parameters of the real image feature extraction network, realizing a training migration to real image data and yielding the final image registration model. Since the image registration model is pre-trained on virtual image data in the early stage, the amount of real sample image data required during training is reduced, lowering the training cost; in the later stage, the real image data and the pre-trained image registration model are used to train the real image feature extraction network, so that the results obtained from virtual image data supervise the training on real data. This improves the training effect of the image registration model, allows the real image feature extraction network to be used for subsequent training, and makes the image registration model easier to apply in a real environment.
- the aforementioned obtaining of the reference two-dimensional image includes: using the actual registration result between the real two-dimensional image and a real three-dimensional image to generate a reference two-dimensional image in which the position of the real target is consistent with the real two-dimensional image. By generating such a reference two-dimensional image, the reference two-dimensional image and the real two-dimensional image can be used together in subsequent training.
- the above-mentioned image registration model further includes a projection image feature extraction network and a position prediction network that participate in the pre-training; after the difference between the first real feature map and the first virtual feature map is used to adjust the network parameters of the real image feature extraction network, the method further includes: using the adjusted real image feature extraction network to perform feature extraction on the real two-dimensional image to obtain a second real feature map; projecting the real three-dimensional image using the first projection model parameters of the real two-dimensional image to obtain a first projection image, and obtaining the first actual two-dimensional position of a feature point on the real target in the first projection image; using the projection image feature extraction network to perform feature extraction on the first projection image to obtain a first projection feature map; using the position prediction network to determine the first projection feature position corresponding to the first actual two-dimensional position on the first projection feature map, and finding in the second real feature map the real feature position corresponding to the first projection feature position on the first projection feature map, the real feature position being used to obtain the first predicted two-dimensional position of the feature point on the real target on the real two-dimensional image; using the first predicted two-dimensional position to obtain a predicted registration result of the real two-dimensional image and the real three-dimensional image; and using the difference between the actual registration result and the predicted registration result to adjust the network parameters of the real image feature extraction network.
- In this way, the results of the virtual data are used to supervise the training on real data, which improves the training effect and makes it easier to apply the image registration model trained on real data to the real environment.
- Moreover, because the real two-dimensional image is used to further train the already pre-trained image registration model, the number of real two-dimensional images required for training can be reduced, so the cost of training the image registration model can be lowered and the related training is easier to carry out.
- the aforementioned use of the difference between the actual registration result and the predicted registration result to adjust the network parameters of the real image feature extraction network includes: using both the difference between the second real feature map and the first virtual feature map and the difference between the actual registration result and the predicted registration result to adjust the network parameters of the real image feature extraction network. By further using these two differences to adjust the network parameters of the real image feature extraction network, the training effect is improved.
- the above method further includes the following steps to pre-train the image registration model: acquiring at least one set of a virtual two-dimensional image and a second projection image, and acquiring the second actual two-dimensional position of a feature point on the virtual target in the virtual two-dimensional image and its third actual two-dimensional position in the second projection image, where the virtual two-dimensional image is obtained by simulated imaging of the virtual target and the second projection image is obtained by simulated projection of the virtual target; inputting each set of the virtual two-dimensional image, the second projection image, and the third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image; and adjusting the network parameters of the image registration model based on the second actual two-dimensional position and the second predicted two-dimensional position.
- the training cost can be reduced.
- virtual images can be generated in large batches, so a large amount of training data can be provided, and thus the effect of training can be improved.
- the training effect can be improved, so that the image registration model, after being trained with real images, can better register real images.
- the above-mentioned inputting of each set of the virtual two-dimensional image, the second projection image, and the third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image includes: using the position prediction network of the image registration model to determine, on the second projection feature map, the second projection feature position corresponding to the third actual two-dimensional position, and finding in the second virtual feature map the virtual feature position corresponding to the second projection feature position on the second projection feature map; the second predicted two-dimensional position is then obtained using the virtual feature position.
- the second projection feature map and the second virtual feature map are obtained through the projection image feature extraction network and the virtual image feature extraction network, respectively.
- After these two feature extraction networks are trained, the feature extraction of each image can be more accurate.
- the adjustment of the network parameters of the image registration model based on the second actual two-dimensional position and the second predicted two-dimensional position includes: adjusting the network parameters of the virtual image feature extraction network, the projection image feature extraction network, and the position prediction network based on the second actual two-dimensional position and the second predicted two-dimensional position. By adjusting the network parameters of these three networks, the training effect of the image registration model can be improved.
- the searching in the second real feature map for the real feature position corresponding to the first projection feature position on the first projection feature map includes: extracting, in the first projection feature map, the first feature information located at the first projection feature position; searching, in the second real feature map, for second feature information whose similarity with the first feature information satisfies a preset similarity condition; and acquiring the real feature position of the second feature information in the second real feature map. By searching for corresponding feature points through feature information, the training can be adjusted according to the type of feature information, which is conducive to improving the training effect.
- the above-mentioned finding in the second virtual feature map of the virtual feature position corresponding to the second projection feature position on the second projection feature map includes: extracting, in the second projection feature map, the first feature information located at the second projection feature position; searching, in the second virtual feature map, for second feature information whose similarity with the first feature information satisfies a preset similarity condition; and acquiring the virtual feature position of the second feature information in the second virtual feature map. Here too, searching for corresponding feature points through feature information allows the training to be adjusted according to the type of feature information, which is conducive to improving the training effect.
- each of the above-mentioned sets of a virtual two-dimensional image and a second projection image includes a virtual two-dimensional image obtained by simulated imaging of the virtual target in a preset pose using a second projection model parameter, and a second projection image obtained by simulated projection of the virtual target in a reference pose using the same second projection model parameter; the second projection model parameters and/or the preset poses corresponding to different sets of virtual two-dimensional images and second projection images are different. Generating the sets in this way allows registration to be trained for the same target from multiple perspectives or positions, so that the image registration model can register images of the same target taken from different perspectives and positions, which improves the training effect and the applicability of the image registration model.
- the above method further includes the following steps to pre-train the image registration model: using the second predicted two-dimensional positions of multiple virtual two-dimensional images corresponding to the same preset pose to determine a predicted three-dimensional position of the feature point, and using the difference between the predicted three-dimensional position of the feature point and its actual three-dimensional position to adjust the network parameters of the image registration model. By using the difference between the predicted and actual three-dimensional positions to adjust the network parameters, the training effect can be further improved.
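The disclosure does not fix a particular algorithm for recovering the predicted three-dimensional position from several predicted two-dimensional positions. The following is a minimal sketch of one common choice, linear (DLT) triangulation, assuming a 3x4 projection matrix is available for each simulated view; all names are illustrative.

```python
import numpy as np

def triangulate_feature_point(projection_matrices, predicted_2d_positions):
    """Estimate the predicted 3D position of a feature point from its predicted
    2D positions in several virtual 2D images of the same preset pose.

    projection_matrices: list of 3x4 projection matrices, one per virtual view.
    predicted_2d_positions: list of (u, v) second predicted two-dimensional positions.
    Uses standard linear (DLT) triangulation and returns a 3-vector.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, predicted_2d_positions):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                    # shape (2 * n_views, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                            # homogeneous least-squares solution
    return X[:3] / X[3]
```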
- acquiring each of the above-mentioned sets of a virtual two-dimensional image and a second projection image includes: performing simulated imaging of the virtual target in a preset pose with a second projection model parameter to obtain the virtual two-dimensional image, and recording the second projection model parameter and the rigid body transformation parameters of the preset pose relative to the reference pose; and performing simulated projection of the virtual target in the reference pose with the same second projection model parameter to obtain the second projection image.
- acquiring the second actual two-dimensional position of the feature point on the virtual two-dimensional image and its third actual two-dimensional position on the second projection image respectively includes: determining at least one feature point on the virtual target in the reference pose; using the second projection model parameter and the rigid body transformation parameters corresponding to the virtual two-dimensional image to determine the second actual two-dimensional position of the feature point on the virtual two-dimensional image; and using the second projection model parameter corresponding to the second projection image to determine the third actual two-dimensional position of the feature point on the second projection image.
- By recording the projection model parameters used when acquiring the virtual two-dimensional image and the second projection image, together with the rigid body transformation parameters of the preset pose relative to the reference pose, these parameters can be used as a basis for comparison when training the image registration model later.
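As a concrete illustration of how the two recorded ground-truth positions could be computed, the sketch below assumes the second projection model parameter can be written as a 3x4 matrix and the rigid body transformation as a rotation R and translation t; the disclosure itself does not prescribe this parameterization.

```python
import numpy as np

def ground_truth_2d_positions(P2, R, t, feature_point):
    """Compute the recorded 2D positions of one feature point on the virtual target.

    P2: 3x4 matrix standing in for the second projection model parameter.
    R, t: rigid body transformation of the preset pose relative to the reference pose.
    feature_point: 3D position of the feature point on the virtual target in the reference pose.
    """
    def project(P, X):
        u, v, w = P @ np.append(X, 1.0)
        return np.array([u / w, v / w])

    # second actual 2D position: target moved into the preset pose, then imaged
    second_actual_2d = project(P2, R @ feature_point + t)
    # third actual 2D position: target kept in the reference pose, then projected
    third_actual_2d = project(P2, feature_point)
    return second_actual_2d, third_actual_2d
```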
- the above determining of at least one feature point on the virtual target in the reference pose includes: randomly selecting at least one feature point on the virtual target in the reference pose; or identifying the target area corresponding to the virtual target in the second projection image, selecting at least one projection point inside or on the edge of the target area, and using the second projection model parameter of the second projection image to project the at least one projection point into three-dimensional space to obtain at least one feature point on the virtual target.
- the feature points can then be used to assist the registration training, which facilitates carrying out the training and improves the training effect.
- In addition, the feature points can be easily found during subsequent registration training, thereby improving the training efficiency of the image registration model.
- the second aspect of the embodiments of the present disclosure provides an image registration method.
- the registration method includes: acquiring a two-dimensional image and a three-dimensional image obtained by imaging a target respectively; projecting the three-dimensional image using the projection model parameters of the two-dimensional image to obtain a projection image; processing the two-dimensional image and the projection image with the image registration model to obtain the two-dimensional position of a feature point on the target on the two-dimensional image; and using the two-dimensional position to obtain the registration result between the two-dimensional image and the three-dimensional image; wherein the image registration model is trained by the training method of the image registration model provided in the first aspect.
- By using an image registration model trained with the training method provided in the above-mentioned first aspect, the two-dimensional image and the three-dimensional image obtained by imaging the target can be registered, and the registration result is more precise.
- the aforementioned use of the two-dimensional position to obtain the registration result between the two-dimensional image and the three-dimensional image includes: using the projection model parameters to project the two-dimensional position into three-dimensional space to obtain the first three-dimensional position of the feature point on the real target; obtaining the second three-dimensional position of the feature point on the real target on the three-dimensional image; and using the first three-dimensional position and the second three-dimensional position to obtain the rigid body transformation parameters of the three-dimensional image relative to the two-dimensional image. By using the first and second three-dimensional positions of the feature point on the real target, the rigid body transformation parameters of the three-dimensional image relative to the two-dimensional image can be obtained, so that the above-mentioned image registration method can be applied to image registration.
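One standard way to recover rigid body transformation parameters from the paired first and second three-dimensional positions is a least-squares (Kabsch) fit; the sketch below is illustrative and is not mandated by the disclosure.

```python
import numpy as np

def rigid_transform_from_points(first_3d, second_3d):
    """Least-squares rigid body transformation (R, t) mapping the second 3D positions
    (feature points on the 3D image) onto the first 3D positions (back-projected
    predicted positions), using the Kabsch algorithm.

    first_3d, second_3d: (N, 3) arrays of corresponding feature-point positions.
    """
    src, dst = second_3d, first_3d
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```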
- a third aspect of the embodiments of the present disclosure provides a training device for an image registration model.
- the device includes: a first acquisition module configured to acquire a real two-dimensional image and a reference two-dimensional image, wherein the real two-dimensional image is obtained by imaging a real target with an imaging device, and the position of the real target in the reference two-dimensional image matches the real two-dimensional image;
- a first feature extraction module configured to use the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map, wherein the image registration model has been pre-trained with virtual images, the virtual image feature extraction network participates in the pre-training, and the virtual images are generated based on a virtual target;
- a second feature extraction module configured to use the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain a first real feature map, where the real image feature extraction network does not participate in the pre-training;
- an adjustment module configured to use the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
- In this way, by using the image registration model pre-trained with virtual images, the real image feature extraction network can be adjusted according to the difference between the first real feature map and the first virtual feature map.
- the results of the virtual data can thus be used to supervise the training on real data, thereby improving the training effect of the image registration model, so that the real image feature extraction network can be used for subsequent training and the model is easier to apply in a real environment.
- a fourth aspect of the embodiments of the present disclosure provides an image registration device.
- the device includes: a second acquisition module configured to acquire a two-dimensional image and a three-dimensional image obtained by imaging a target respectively; a projection module configured to project the three-dimensional image using the projection model parameters of the two-dimensional image to obtain a projection image; a prediction module configured to use the image registration model to process the two-dimensional image and the projection image to obtain the two-dimensional position of a feature point on the target on the two-dimensional image; and a registration module configured to use the two-dimensional position to obtain the registration result between the two-dimensional image and the three-dimensional image; wherein the image registration model is obtained by training with the device described in the third aspect.
- In this way, the first real feature map obtained by the real image feature extraction network can correspond to the first virtual feature map.
- the results of the virtual data can thus be used to supervise the training on real data, thereby improving the training effect of the image registration model, so that it can be used for subsequent registration and is easier to apply in a real environment.
- a fifth aspect of the embodiments of the present disclosure provides an image registration device.
- the device includes a processor and a memory coupled to each other, wherein the processor is configured to execute a computer program stored in the memory to perform the image registration model training method described in the first aspect above or the image registration method described in the second aspect above.
- a sixth aspect of the embodiments of the present disclosure provides a computer-readable storage medium.
- the medium stores a computer program that can be executed by the processor, and the computer program is used to implement the method described in the first aspect or the second aspect.
- a seventh aspect of the embodiments of the present disclosure provides a computer program product.
- the program product stores one or more program instructions, and the program instructions are loaded and executed by the processor to implement the method described in the first aspect or the second aspect.
- the embodiments of the present disclosure can adjust the network parameters of the real image feature extraction network according to the difference between the first real feature map and the first virtual feature map, so that the first real feature map obtained by the real image feature extraction network can correspond to the first virtual feature map.
- the results of the virtual data can thus be used to supervise the training on real data, thereby improving the training effect of the image registration model, so that it can be used for subsequent training and is easier to apply in a real environment.
- FIG. 1 is a schematic flowchart of a training method of an image registration model according to an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of a training method of an image registration model according to an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of the first process of the training method of the image registration model according to the embodiment of the present disclosure
- FIG. 4 is a schematic diagram of the second process of the training method of the image registration model according to the embodiment of the present disclosure
- FIG. 5 is a schematic diagram of the third process of the training method of the image registration model according to the embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of the fourth process of the training method of the image registration model according to the embodiment of the present disclosure.
- FIG. 7A is a schematic flowchart of an embodiment of an image registration method according to an embodiment of the present disclosure.
- Fig. 7B is a logic flow chart of an embodiment of an image registration method according to an embodiment of the present disclosure.
- FIG. 7C is a schematic diagram of determining projection coordinates of feature points on a virtual two-dimensional image according to an embodiment of the present disclosure
- FIG. 7D is a schematic diagram of a training process of a real X-ray image feature extraction network provided by an embodiment of the present disclosure
- FIG. 8 is a schematic diagram of a framework of an embodiment of a training device for an image registration model according to an embodiment of the present disclosure
- FIG. 9 is a schematic diagram of a framework of an embodiment of an image registration device according to an embodiment of the present disclosure.
- FIG. 10 is a schematic block diagram of the structure of an embodiment of an image registration device according to an embodiment of the present disclosure.
- FIG. 11 is a schematic diagram of a framework of an implementation of a storage device according to an embodiment of the present disclosure.
- FIG. 1 is a schematic flowchart of a training method of an image registration model according to an embodiment of the present disclosure.
- Step S10 Obtain a real two-dimensional image and a reference two-dimensional image, where the real two-dimensional image is obtained by imaging a real target using an imaging device, and the location of the real target in the reference two-dimensional image matches the real two-dimensional image.
- In order to train the image registration model, virtual images can first be used to pre-train an initial image registration model.
- this initial image registration model includes a virtual image feature extraction network.
- the virtual image feature extraction network is used for feature extraction of virtual two-dimensional images.
- the real image feature extraction network is added to the pre-trained image registration model to further train the real image feature extraction network to obtain the final image registration model.
- real image data and a pre-trained virtual image feature extraction network can be used to execute the method of the embodiments of the present disclosure to adjust the network parameters of the real image feature extraction network.
- the real two-dimensional image is obtained by imaging a real target using an imaging device.
- the real target is, for example, a cup in a real environment, a bone of a human body, and so on.
- the imaging device is, for example, a camera, an X-ray machine, a CT (Computed Tomography) scanner, or other equipment with an imaging function.
- the position of the real target in the reference two-dimensional image matches the real two-dimensional image. This can mean that the position of the real target in the reference two-dimensional image is the same as its position in the real two-dimensional image, or that the rigid body transformation parameters between the three-dimensional real target corresponding to the reference two-dimensional image and the real target corresponding to the real two-dimensional image are known.
- "the same position" can be understood as the angle, shape, and size of the real target being the same in the real two-dimensional image and the reference two-dimensional image.
- the rigid body transformation parameters between the real target corresponding to the reference two-dimensional image and the real target corresponding to the real two-dimensional image being known can be understood as the spatial transformation process of the former relative to the latter being known.
- In this case, the real target corresponding to the real two-dimensional image can be transformed with the known rigid body transformation parameters to obtain a real target whose position is consistent with the real target corresponding to the reference two-dimensional image.
- the reference two-dimensional image may be obtained by processing a real three-dimensional image of a real target. For example, using the actual registration result between the real two-dimensional image and the real three-dimensional image, a reference two-dimensional image whose position of the real target is consistent with the real two-dimensional image is generated.
- the reference two-dimensional image can also be obtained by imaging the real target again.
- As for the real three-dimensional image, it can be obtained by capturing the real target with an imaging device capable of producing three-dimensional images, for example by CT scanning or with a 3D scanner.
- the three-dimensional image can also be obtained by 3D modeling of the real target.
- the actual registration result of the real two-dimensional image and the real three-dimensional image means that the rigid body transformation parameters between the real target at the time the real two-dimensional image was taken and the real three-dimensional image are known. Since the pose of the acquired real three-dimensional image may differ by a rigid body transformation from the pose of the real target when the real two-dimensional image was taken, the actual registration result can be used to adjust the pose of the real three-dimensional image so that it is consistent with the pose of the real target when the real two-dimensional image was taken. Pose refers to the placement of the real target, such as horizontal, vertical, or diagonal.
- In this way, a reference two-dimensional image in which the position of the real target is consistent with its position in the real two-dimensional image can be obtained.
- a consistent position can be understood as the angle, shape, and size of the real target being the same in the real two-dimensional image and the reference two-dimensional image.
- the method of generating the reference two-dimensional image is, for example, a method of projection.
- the projection can be performed, for example, by simulated imaging.
- the projection model parameters used are the projection model parameters with which the real two-dimensional image was taken. Because the projection is based on the same pose of the three-dimensional image and the same projection model parameters, projecting the three-dimensional image yields a reference two-dimensional image in which the position of the real target is consistent with the real two-dimensional image.
- the reference two-dimensional image and the real two-dimensional image can be used in subsequent training.
- Step S11 Use the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map; wherein the image registration model has been pre-trained using the virtual image, and the virtual image feature extraction network Participating in pre-training, virtual images are generated based on virtual targets.
- the image registration model may be a neural network model used to register images, for example, it may be a fully convolutional neural network or a convolutional neural network.
- the image registration model can include multiple neural networks, which can be adjusted as needed.
- the image registration model includes a virtual image feature extraction network, and the virtual image feature extraction network is a neural network, such as a convolutional neural network.
- the structure of the virtual image feature extraction network is not limited, as long as it can perform feature extraction.
- the virtual image is generated based on the virtual target.
- the virtual image may include a virtual two-dimensional image.
- the virtual target can be a target generated by artificial simulation, and the simulation can be any object that exists in the real environment, such as a cup, or the bones of various parts of the human body, and so on.
- the virtual image is generated by simulation, for example, by simulating projection. Because objects in the real environment are always connected to other objects or may overlap in a certain direction, for example, the bones of the human body are always connected to other bones of the human body or other muscle tissues or overlap in a certain direction. Therefore, when performing simulated imaging of a virtual target, other objects can also be simulated imaging, so that the generated virtual image can be closer to the image generated in the real environment.
- the image registration model can be trained for objects existing in the real environment, and the applicability of the image registration model is improved.
- the image registration model having been pre-trained with virtual images means that the image registration model has already been trained with virtual images; for example, multiple sets of virtual images are used as training data to register the virtual images and to adjust the network parameters of the image registration model.
- the virtual image feature extraction network participating in the pre-training means that, when the image registration model registers the virtual images and its network parameters are adjusted, the network parameters of the virtual image feature extraction network are also adjusted.
- the pre-trained image registration model can be used for virtual image registration.
- the image registration model in this step has been pre-trained for the virtual image.
- the virtual image feature extraction network of the image registration model is then used to perform feature extraction on the reference two-dimensional image to obtain the first virtual feature map.
- the output result of the virtual image feature extraction network may include the extracted feature information.
- the feature information is, for example, a feature vector, such as a 128-dimensional feature vector.
- the image registration model pre-trained with virtual images is used, so that its network parameters have already been adjusted in advance, which can speed up the subsequent training with real two-dimensional and three-dimensional images and thus improve training efficiency.
- virtual two-dimensional images can be generated based on virtual targets, so a large number of virtual two-dimensional images can be generated as training data, and a large amount of training data can also improve the effect of training.
- the acquisition cost of virtual images is lower than that of real two-dimensional images. Pre-training the image registration model using virtual images can reduce the need for real two-dimensional images and reduce training costs.
- Step S12 Use the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain a first real feature map; wherein, the real image feature extraction network does not participate in pre-training.
- the image registration model further includes a real image feature extraction network for feature extraction of a real two-dimensional image, and the output result of the real image feature extraction network is defined as the first real feature map.
- the output result of the real image feature extraction network may include the extracted feature information.
- the feature information is, for example, a feature vector, such as a 128-dimensional feature vector.
- the real image feature extraction network not participating in the pre-training means that the real image feature extraction network has not undergone the pre-training process using virtual images.
- Because the real image feature extraction network does not participate in the pre-training, when it is subsequently trained, the real two-dimensional images can be used to improve its feature extraction effect on real two-dimensional images.
- Step S13 Use the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
- the two feature maps can be used to compare the difference in feature extraction between the real image feature extraction network and the virtual image feature extraction network, and the network parameters of the real image feature extraction network are adjusted according to this difference.
- the difference may be the type of the extracted feature information, the dimension of the feature information, and so on.
- the first real feature map output by the real image feature extraction network and the first virtual feature map output by the virtual image feature extraction network are consistent in size. If the sizes of the two feature maps are inconsistent, the network parameters of the real image feature extraction network can be adjusted so that the two feature maps can be consistent.
- the feature information of the first real feature map and the first virtual feature map are consistent in type or have a high degree of similarity.
- For example, if the feature information of the first virtual feature map is a feature vector, the first real feature map should also contain feature vector information; and if the feature information of the first virtual feature map is a 256-dimensional feature vector, the feature information of the first real feature map should also be 256-dimensional, or close to 256-dimensional, such as 254-dimensional.
- In this way, the network parameters of the real image feature extraction network are adjusted according to the difference between the first real feature map and the first virtual feature map, so that the first real feature map obtained by the real image feature extraction network can correspond to the first virtual feature map.
- the results of the virtual data can thus be used to supervise the training on real data, thereby improving the training effect of the image registration model, so that the real image feature extraction network can be used for subsequent training and the image registration model can be more easily applied in the real environment.
- After the network parameters of the real image feature extraction network have been adjusted, the real image feature extraction network meets the requirements of subsequent training, and the training of the image registration model can be continued.
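A minimal PyTorch-style sketch of this adjustment step is shown below, assuming both feature extraction networks are torch.nn.Module instances and using a mean-squared error between the two feature maps as the measure of difference; the disclosure does not specify the exact loss function or optimizer, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def adapt_real_encoder_step(real_encoder, virtual_encoder, real_2d, reference_2d, optimizer):
    """One adjustment step: pull the first real feature map towards the first virtual one.

    real_encoder / virtual_encoder: the real / virtual image feature extraction networks
    (hypothetical torch.nn.Module instances); only real_encoder is updated.
    real_2d, reference_2d: batched image tensors with matching target positions.
    """
    with torch.no_grad():                      # the pre-trained virtual network stays frozen
        first_virtual_feat = virtual_encoder(reference_2d)
    first_real_feat = real_encoder(real_2d)
    loss = F.mse_loss(first_real_feat, first_virtual_feat)
    optimizer.zero_grad()
    loss.backward()                            # gradients reach only the real encoder
    optimizer.step()
    return loss.item()
```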
- FIG. 2 is a schematic flowchart of a training method of an image registration model according to an embodiment of the present disclosure.
- This embodiment is a continued training process for the image registration model, building on the embodiment of Fig. 1 above.
- the embodiment of the present disclosure includes the following steps:
- Step S20 Obtain a real two-dimensional image and a reference two-dimensional image, where the real two-dimensional image is obtained by imaging a real target using an imaging device, and the location of the real target in the reference two-dimensional image matches the real two-dimensional image.
- Step S21 Use the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map; wherein the image registration model has been pre-trained using the virtual image, and the virtual image feature extraction network Participating in pre-training, virtual images are generated based on virtual targets.
- Step S22 Use the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain a first real feature map; wherein, the real image feature extraction network does not participate in pre-training.
- Step S23 Use the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
- the above steps S20 to S23 respectively correspond to the above steps S10 to S13 one to one.
- the image registration model further includes a projection image feature extraction network and a position prediction network that participate in pre-training.
- the projection image feature extraction network can be used to perform feature extraction on the projection image; the position prediction network can determine the location information of the feature points on each feature map according to the feature images extracted by each feature extraction network.
- the projection image feature extraction network and the position prediction network being pre-trained means that these two networks have already been pre-trained with virtual images and their corresponding network parameters have been adjusted. The first projection feature map output by the projection image feature extraction network and the first virtual feature map output by the virtual image feature extraction network are the same in size and in the type of feature information, or have a high degree of similarity. On this basis, the second real feature map is also the same as the first projection feature map in size and type of feature information, or has a high degree of similarity.
- the position prediction network is pre-trained, which means that the position prediction network can find the corresponding point according to the position of the feature point on the virtual feature map.
- embodiments of the present disclosure use the above-mentioned pre-trained projection image feature extraction network and position prediction network to continue training the real image feature extraction network.
- Step S24 Use the adjusted real image feature extraction network to perform feature extraction on the real two-dimensional image to obtain a second real feature map.
- In step S23 the parameters of the real image feature extraction network have already been adjusted; features are now extracted from the real two-dimensional image using the adjusted real image feature extraction network, and the output result is defined as the second real feature map.
- the second real feature map and the first virtual feature map are consistent in size and feature information or have a very high degree of similarity.
- Step S25 Project the real three-dimensional image by using the first projection model parameters of the real two-dimensional image to obtain the first projection image, and obtain the first actual two-dimensional position of the feature point on the real target in the first projection image.
- the projection model parameter corresponding to the real two-dimensional image is defined as the first projection model parameter.
- the first projection model parameters of the real two-dimensional image can be used to project the real three-dimensional image, and the obtained image is defined as the first projection image.
- the projection method is, for example, simulated projection.
- the image registration model can be trained by selecting feature points on the first projection image and using the position information of the feature points. Since the three-dimensional image is obtained from the real target, the feature points can be selected on the real target.
- a feature point can be understood as any point on the real target; it can be determined by analyzing the target position, designated manually, or confirmed by the image registration model itself. Alternatively, the feature point can be determined in the first projection image first, and its position on the real target determined afterwards.
- the actual three-dimensional position of the feature point on the three-dimensional image may be determined first, and then the first actual two-dimensional position of the feature point in the first projection image is obtained according to the projection model parameters.
- a three-dimensional coordinate system can be established for the three-dimensional image, so that the three-dimensional coordinates of the actual three-dimensional position of the feature point on the three-dimensional image can be obtained.
- the obtained first actual two-dimensional position is the two-dimensional coordinate, for example, the position (2, 2) of a certain pixel.
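As a simple illustration of obtaining the first actual two-dimensional position, the sketch below projects a 3D feature point with a pinhole-style 3x4 projection matrix standing in for the first projection model parameters; the disclosure does not restrict the form of the projection model.

```python
import numpy as np

def project_feature_point(P1, point_3d):
    """Project a 3D feature point with a 3x4 matrix P1 standing in for the first
    projection model parameters, returning its 2D pixel position."""
    u, v, w = P1 @ np.append(point_3d, 1.0)   # homogeneous projection
    return np.array([u / w, v / w])           # first actual two-dimensional position
```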
- Step S26 Use the projection image feature extraction network to perform feature extraction on the first projection image to obtain a first projection feature map.
- the projection image feature extraction network can be used to perform feature extraction on the first projection image, so that the first projection feature map can be obtained.
- the projection image feature extraction network is a neural network, such as a convolutional neural network.
- the structure of the projection image feature extraction network is not limited, as long as it can perform feature extraction.
- the result output by the network can be defined as the first projection feature map.
- each pixel corresponding to the first projection feature map will contain corresponding feature information.
- the feature information is, for example, a feature vector, such as a 128-dimensional feature vector.
- Step S27 Using the position prediction network, determine the first projection feature position corresponding to the first actual two-dimensional position on the first projection feature map, and find in the second real feature map the real feature position corresponding to the first projection feature position on the first projection feature map; the real feature position is used to obtain the first predicted two-dimensional position of the feature point on the real target on the real two-dimensional image.
- After the first actual two-dimensional position of the feature point in the first projection image has been acquired, the position prediction network can be used to determine, on the first projection feature map, the first projection feature position corresponding to the first actual two-dimensional position of the feature point on the first projection image.
- the positions on a feature map have a corresponding relationship with the positions on the image from which the features were extracted.
- the positions of the virtual two-dimensional image and the first virtual feature map, the real two-dimensional image and the first real feature map, the real two-dimensional image and the second real feature map, and the first projection image and the first projection feature map all have such a corresponding relationship.
- each pixel on the feature map has a corresponding relationship with the pixel on the image used to extract the feature. The corresponding relationship of this position can be determined according to the size ratio of the feature map and the size of the image used to extract the feature.
- the size of the first projection feature map may be in an integer proportional relationship with the size of the first projection image. For example, if the size of the input first projection image is 256*256 pixels, the size of the output first projection feature map can be 256*256 pixels, 128*128 pixels, or 512*512 pixels. If the projection feature map and the projection image have the same size, for example both 256*256 pixels, then when the actual two-dimensional position of the feature point on the projection image is a certain pixel, say (1,1), the corresponding projection feature position on the projection feature map is also (1,1).
- If the mapped position does not fall exactly on a pixel of the projection feature map, the corresponding projection feature position can be taken as at least one of the neighbouring pixels, for example (1,1), (1,2), (2,1), or (2,2), or an operation can be performed on these four pixels to obtain a new pixel, and the position of that new pixel is used as the projection feature position of the feature point in the projection feature map; the operation is, for example, an interpolation calculation.
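The position correspondence and the interpolation mentioned above can be illustrated as follows; scaling by the size ratio and bilinear interpolation of the four neighbouring feature vectors are assumptions of this sketch, since the disclosure only requires some mapping and an interpolation-like operation.

```python
import numpy as np

def image_to_feature_position(pos, image_size, feature_size):
    """Scale a pixel position on the projection image to the projection feature map."""
    return (pos[0] * feature_size[0] / image_size[0],
            pos[1] * feature_size[1] / image_size[1])

def sample_feature_vector(feature_map, pos):
    """Bilinearly interpolate the four neighbouring feature vectors at a fractional position.

    feature_map: (H, W, C) array; pos: (x, y) possibly non-integer coordinates.
    """
    x, y = pos
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, feature_map.shape[1] - 1)
    y1 = min(y0 + 1, feature_map.shape[0] - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * feature_map[y0, x0] + wx * feature_map[y0, x1]
    bottom = (1 - wx) * feature_map[y1, x0] + wx * feature_map[y1, x1]
    return (1 - wy) * top + wy * bottom
```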
- the position prediction network can thus determine the first projection feature position of the feature point in the first projection feature map according to the correspondence between positions in the first projection image and the first projection feature map. For example, when the first projection image and the first projection feature map have the same size and the first actual two-dimensional position of the feature point on the first projection image is the pixel (5, 5), the first projection feature position of the feature point in the first projection feature map is also the pixel (5, 5).
- After determining the first projection feature position of the feature point in the first projection feature map, the position prediction network can find in the second real feature map the real feature position corresponding to the first projection feature position on the first projection feature map, and from the real feature position obtain the first predicted two-dimensional position on the real two-dimensional image. The first predicted two-dimensional position is the predicted position of the feature point on the real two-dimensional image.
- the execution order of step S25 and step S26 is not limited and can be adjusted according to actual needs.
- the “find out the real feature position corresponding to the first projection feature position on the first projection feature map in the second real feature map” described in this step can be implemented by the following steps:
- Step S271 Find the first feature information located at the first projection feature position in the first projection feature map.
- the location prediction network may determine the first feature information corresponding to the location according to the first projection feature location of the feature point in the first projection feature map. For example, when the projection feature location (first projection feature location) is the location of the pixel point (1, 1), the first feature information is the feature information corresponding to the pixel point (1, 1).
- the feature information may be a feature vector.
- Step S272 In the second real feature map, search for second feature information whose similarity with the first feature information satisfies a preset similarity condition.
- the position prediction network can search the second real feature map, based on the first feature information, for second feature information whose similarity with the first feature information satisfies the preset similarity condition.
- the position prediction network can search for the second feature information that meets the preset similarity condition in the second real feature map according to the feature vector, and the second feature information is also a feature vector.
- the preset similarity condition can be set manually, for example, a similarity of 90%-95% means that the search result is acceptable.
- the preset similar conditions can be set according to the application scenario, and there is no limitation here.
- the preset similarity condition may be the second feature information corresponding to the highest similarity.
- Step S273 Acquire the real feature position of the second feature information in the second real feature map.
- Since each position in a feature map has corresponding feature information, the corresponding real feature position in the second real feature map can be found based on the second feature information.
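A sketch of one way to implement the search in steps S271 to S273, using cosine similarity and a "highest score above a threshold" rule as the preset similarity condition; both choices are assumptions, since the disclosure leaves the similarity measure and condition open.

```python
import numpy as np

def find_real_feature_position(first_feature_info, real_feature_map, min_similarity=0.9):
    """Find the position in the second real feature map whose feature vector best
    matches the first feature information taken from the projection feature map.

    real_feature_map: (H, W, C) array; first_feature_info: (C,) feature vector.
    """
    h, w, c = real_feature_map.shape
    flat = real_feature_map.reshape(-1, c)
    sims = flat @ first_feature_info / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(first_feature_info) + 1e-8)
    idx = int(np.argmax(sims))
    if sims[idx] < min_similarity:
        return None                           # no location satisfies the condition
    return (idx % w, idx // w)                # (x, y) real feature position
```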
- After the position prediction network has been pre-trained, it can find corresponding feature points according to the positions of feature points on the feature maps of virtual images. At this stage, the feature maps of real images are further used to train the position prediction network to find the corresponding feature points on the feature maps of real images.
- In this way, the results of the virtual data can be used to supervise the training on real data, improving the training effect and making the image registration model easier to apply in the real environment.
- Finally, the position prediction network can obtain the position of the feature point on the real two-dimensional image, i.e. the first predicted two-dimensional position, according to the position correspondence between the second real feature map and the real two-dimensional image.
- Step S28 Use the first predicted two-dimensional position to obtain the predicted registration result of the real two-dimensional image and the real three-dimensional image.
- After the first predicted two-dimensional position has been obtained, the position prediction network has produced the predicted position of the feature point in the real two-dimensional image.
- the predicted three-dimensional position of the feature point on the real target at the time the real two-dimensional image was taken is then obtained through conversion.
- the actual three-dimensional position of the feature point on the real three-dimensional image corresponding to the real target in the first projection image is also known.
- Step S29 Use the difference between the actual registration result and the predicted registration result to adjust the network parameters of at least one of the real image feature extraction network, the projected image feature extraction network, and the position prediction network.
- the predicted registration result can be compared with the actual registration result, and then the effect of the relevant network can be judged based on the difference of the comparison. For example, it is possible to predict the registration result and the actual registration result to obtain the relevant loss value, and then adjust the network parameters according to the size of the loss value.
- the virtual image can be used to pre-train the projection image feature extraction network and the position prediction network. On this basis, in order to enable the position prediction network to obtain better prediction results from the feature information extracted by the real image feature extraction network and by the projection image feature extraction network, the difference between the actual registration result and the predicted registration result can be used as a reference factor to adjust the network parameters of the real image feature extraction network.
- Although in step S23 the difference between the first real feature map and the first virtual feature map has already been used to adjust the network parameters of the real image feature extraction network, in order to make the difference between the feature map extracted by the real image feature extraction network and the first virtual feature map smaller, or to make the extracted feature map cooperate better with the position prediction network, the difference between the second real feature map and the first virtual feature map, together with the difference between the actual registration result and the predicted registration result, can be further used to adjust the network parameters of the real image feature extraction network and thereby improve the training effect.
- the difference between the predicted registration result and the actual registration result also reflects the accuracy of the prediction of the location prediction network.
- the prediction accuracy of the position prediction network is related not only to the real image feature extraction network, but also to the projection image feature extraction network and to the position prediction network itself. Therefore, the network parameters of the projection image feature extraction network and the position prediction network can be adjusted according to the difference between the predicted registration result and the actual registration result, so as to improve the prediction accuracy of the position prediction network. For example, the network parameters of each network are adjusted according to the loss value between the predicted registration result and the actual registration result.
- the above adjustment of the network parameters of the real image feature extraction network and the adjustment of the network parameters of the projection image feature extraction network and the position prediction network can be performed at the same time or separately; alternatively, only the network parameters of the real image feature extraction network are adjusted, or only the network parameters of the projection image feature extraction network and the position prediction network are adjusted.
- As long as the prediction accuracy of the position prediction network can be improved, there is no restriction on which network parameters are adjusted.
- the above step S27 and the subsequent steps can be re-executed, or the method described in the embodiment of the present disclosure can be re-executed, so as to continuously perform the process of searching for the first predicted two-dimensional position, calculating the loss value, and adjusting the network parameters until the requirements are met. Meeting the requirements here may mean that the loss value is less than a preset loss threshold, or that the loss value is no longer reduced.
- the embodiments of the present disclosure use the virtual image feature extraction network, the projection image feature extraction network, and the position prediction network that have undergone virtual image training to train together with the real image feature extraction network, thereby using the results of virtual data to supervise the training on real data. This improves the training effect and also makes the image registration model trained on real data easier to apply in the real environment.
- In addition, using real two-dimensional images only to further train the image registration model reduces the large number of real two-dimensional images otherwise required for training, so the cost of training the image registration model is reduced and the relevant training is easier to carry out.
- FIG. 3 is a schematic diagram of the first process of the training method of the image registration model according to the embodiment of the present disclosure.
- the embodiments of the present disclosure are related to the process of pre-training the image registration model mentioned in the above two embodiments, including the following steps:
- Step S31 Acquire at least one group of a virtual two-dimensional image and a second projection image, and acquire the second actual two-dimensional position of the feature points on the virtual target in the virtual two-dimensional image and their third actual two-dimensional position in the second projection image, where the virtual two-dimensional image is obtained by simulated imaging of the virtual target, and the second projection image is obtained by simulated projection of the virtual target.
- the virtual two-dimensional image is obtained by simulating imaging of the virtual target
- the second projection image is obtained by simulating projection of the virtual target.
- the virtual target can be an artificially set target, and can be any object that exists in the real environment, such as a cup, or the bones of various parts of the human body, and so on. Since objects in the real environment are always connected to other objects or may overlap in a certain direction, for example, the bones of the human body are always connected to other bones of the human body or other muscle tissues or overlap in a certain direction, Therefore, when performing simulated imaging of a virtual target, other objects can also be simulated imaging, so that the generated virtual two-dimensional image can be closer to the image generated in the real environment.
- the image registration model can be trained for objects existing in the real environment, and the applicability of the image registration model is improved.
- the way of simulated imaging may be to simulate the process of generating a two-dimensional image using a three-dimensional object in a real environment, such as the process of using an X-ray machine to generate an X-ray image.
- the ray tracing method can be used to project the simulated object through a point light source, that is, the simulated imaging method includes simulated projection.
- the second projection image is obtained by simulated projection of the virtual target.
- the second projection image may only include the virtual target itself, that is, only the virtual target is simulated and projected to generate a second projection image with only the virtual target.
- the image registration model can be made to perform relevant operations on the virtual target in a targeted manner, eliminating the influence of other objects.
- feature extraction is performed on the virtual target, which ensures that the extracted feature information is all valid feature information.
- Simulated projection can be a process of generating a two-dimensional image by projecting a three-dimensional object through a computer simulation, and can be achieved by methods such as ray tracing.
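- A deliberately simplified, parallel-beam sketch of simulated projection is shown below; a real implementation would cast rays from a point light source through the volume (cone-beam ray tracing), but integrating attenuation along the ray direction is the same underlying idea, and all names here are illustrative assumptions.

```python
import numpy as np

def simple_parallel_drr(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Toy digitally reconstructed radiograph: integrate (sum) the voxel
    attenuation values of a 3D volume along one axis, then normalise the
    result to [0, 1] so it can be treated as a two-dimensional image."""
    drr = volume.sum(axis=axis).astype(np.float64)
    drr -= drr.min()
    return drr / (drr.max() + 1e-8)
```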
- the virtual two-dimensional image and the second projection image are generated in the virtual environment, where the various parameters of the virtual environment are known, for example the projection model parameters used when the virtual two-dimensional image and the second projection image are generated. Therefore, with all parameters set artificially, the registration result between the virtual two-dimensional image and the second projection image is known and accurate; that is, the generated virtual two-dimensional image and second projection image automatically carry registration annotations.
- Since the virtual two-dimensional image and the second projection image have been registered, the position information on the virtual two-dimensional image can correspond to the position information on the second projection image.
- the position information of the point on the virtual two-dimensional image and the second projected image are both known.
- Using the registered virtual image to train the image registration model can make the registration result of the image registration model more accurate.
- Since both the virtual two-dimensional image and the second projection image are generated by simulating the process of projecting a three-dimensional object, the projection model parameters and the pose of the virtual target affect the generated two-dimensional image. Therefore, these two kinds of parameters can be set accordingly.
- the pose of the virtual target is the position and posture of the virtual object, such as the position of the virtual object in the virtual environment, or the placement posture of the virtual object, such as horizontal placement, vertical placement, or diagonal placement.
- the projection model parameters are the various parameters involved in the process of simulating the projection, such as the position of the point light source, the angle of the point light source, the distance between the point light source and the virtual target, and so on.
- each group of a virtual two-dimensional image and a second projection image includes a virtual two-dimensional image obtained by simulated imaging of the virtual target in a preset pose using the same second projection model parameters, and a second projection image obtained by simulated projection of the virtual target in the reference pose using the same second projection model parameters.
- the second projection model parameters can be set in advance, and then the virtual two-dimensional image can be obtained according to the set projection model parameters.
- a virtual two-dimensional image is generated in advance, and then the corresponding second projection model parameters are recorded. That is, the second projection model parameters of the virtual two-dimensional image and the second projection image of the same group are the same.
- the pose of the virtual target at this time is defined as the reference pose
- the pose of the virtual object when the virtual two-dimensional image is obtained is the preset pose.
- the reference pose can be the same as the preset pose, that is, there is no change in the virtual object.
- the reference pose may also be different from the preset pose, that is, the virtual target in the preset pose can be rotated, translated or reversed in the virtual space relative to the virtual target in the reference pose.
- the reference pose may be an artificially designated initial pose, that is, the preset poses are all obtained after translation or rotation of the reference pose. It can be understood that the second projection model parameters and/or preset poses corresponding to the virtual two-dimensional images and the second projection images of different groups are different.
- By generating multiple groups of virtual two-dimensional images and second projection images with different projection model parameters and/or preset poses to train the image registration model, the trained image registration model can register images obtained from different shooting perspectives and in different poses, which improves the applicability and accuracy of the image registration model.
- FIG. 4 is a schematic diagram of the second process of the training method of the image registration model according to the embodiment of the present disclosure.
- the "acquiring each group of virtual two-dimensional image and the second projection image" described in this step may include the following steps:
- Step S311 Perform simulated imaging of the virtual target in the preset pose using the same second projection model parameters to obtain a virtual two-dimensional image, and record the second projection model parameters and the rigid body transformation parameters of the virtual target in the preset pose relative to the reference pose.
- the preset pose and the reference pose may be artificially set positions and poses of the virtual target in the virtual three-dimensional space.
- the adjustment of the virtual target from the reference pose to the preset pose can also be preset. That is, the rigid body transformation process of adjusting the virtual target from the reference pose to the preset pose is known. That is to say, the rigid body transformation parameters of the virtual target with the preset pose relative to the reference pose can be obtained.
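- The rigid body transformation parameters mentioned here can be written, for example, as three rotation angles and a translation vector; the sketch below (an assumption about one possible parameterisation) applies such a transformation to points of the virtual target.

```python
import numpy as np

def rigid_transform(points: np.ndarray, rx: float, ry: float, rz: float, t) -> np.ndarray:
    """Move (N, 3) points of the virtual target from the reference pose to the
    preset pose: rotate around the x, y and z axes (angles in radians), then translate by t."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return points @ R.T + np.asarray(t)
```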
- For a virtual target in the preset pose, a virtual two-dimensional image can be obtained by performing simulated imaging according to the set second projection model parameters.
- Since the subsequent training of the image registration model needs the second projection model parameters that have been set and the rigid body transformation parameters of the virtual target adjusted from the reference pose to the preset pose, the corresponding projection model parameters and rigid body transformation parameters can be recorded at the same time as the virtual two-dimensional image is obtained. These parameters can then be used as the basis for comparison in the subsequent training, so that the network parameters of the image registration model can be adjusted and the training effect improved.
- Step S312 Perform simulated projection on the virtual target in the reference pose using the same second projection model parameter to obtain a second projection image.
- the same second projection model parameters as the virtual two-dimensional image can be further used to simulate the projection of the virtual target to obtain the second projected image.
- many sets of virtual two-dimensional images and second projected images can be generated.
- their projection model parameters and preset poses may be different, or may be partially different.
- the difference in projection model parameters can be that only one of the projection model parameters is changed, such as the angle of the point light source (ie, the shooting angle), or multiple or all of the parameters are changed.
- Different preset poses mean that, when the virtual targets corresponding to different groups of virtual two-dimensional images are compared, there is a rigid body transformation of translation, rotation or inversion between them.
- By generating multiple groups of virtual two-dimensional images and second projection images with different projection model parameters and preset poses to train the image registration model, the trained image registration model can register images obtained from different shooting perspectives and in different poses, which improves the applicability and accuracy of the image registration model.
- the aforementioned virtual two-dimensional image may be a simulated X-ray image
- the second projection image is a digitally reconstructed radiograph (DRR) image
- the aforementioned image registration model can be used for image registration in the medical field.
- the trained image registration model can register the X-ray image and the digitally reconstructed radiographic image, which improves the training effect of the image registration model on this type of image registration.
- By recording the projection model parameters used when acquiring the virtual two-dimensional image and the second projection image, as well as the rigid body transformation parameters of the preset pose relative to the reference pose, these parameters can be used as a basis for comparison when training the image registration model later.
- the image registration model can be trained by selecting feature points and using the location information of the feature points. For example, at least one feature point can be determined on the virtual target. Because the various parameters of the virtual target in the virtual environment are known, and the second projection model parameters used to generate the virtual two-dimensional image and the second projection image as well as the rigid body transformation parameters are also known, it is possible to determine the second actual two-dimensional position of the feature point in the virtual two-dimensional image and its third actual two-dimensional position in the second projection image.
- the second actual two-dimensional position and the third actual two-dimensional position may be two-dimensional coordinates.
- the feature point can correspond to a pixel on the virtual two-dimensional image and on the second projection image; in that case, the second actual two-dimensional position and the third actual two-dimensional position of the feature point on the virtual two-dimensional image and the second projection image can be pixel positions, such as pixel (1, 1), pixel (10, 10), and so on.
- In implementation, a three-dimensional coordinate system can be established in the virtual environment to determine the three-dimensional coordinates of the feature points, and the second actual two-dimensional position and the third actual two-dimensional position are then calculated from the second projection model parameters and the corresponding rigid body transformation parameters.
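- Assuming the projection model parameters can be expressed as a 3x4 projection matrix (a common convention, not mandated by the disclosure), the second and third actual two-dimensional positions could be computed as in the sketch below, reusing the rigid_transform sketch above for the preset pose.

```python
import numpy as np

def project_points(points_3d: np.ndarray, projection_matrix: np.ndarray) -> np.ndarray:
    """Project (N, 3) feature points with a 3x4 projection matrix P and return
    their (N, 2) pixel coordinates on the image plane."""
    homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    proj = homogeneous @ projection_matrix.T           # (N, 3) homogeneous image points
    return proj[:, :2] / proj[:, 2:3]

# third_actual_2d = project_points(feature_points_reference_pose, P)               # second projection image
# second_actual_2d = project_points(                                               # virtual two-dimensional image
#     rigid_transform(feature_points_reference_pose, rx, ry, rz, t), P)
```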
- FIG. 5 is a schematic diagram of the third process of the training method of the image registration model according to the embodiment of the present disclosure.
- acquiring the feature points on the virtual target at the second actual two-dimensional position of the virtual two-dimensional image and the third actual two-dimensional position of the second projected image respectively can be achieved through the following steps:
- Step S313 Determine at least one feature point on the virtual target in the reference pose.
- When at least one feature point is selected on the virtual target, it can be selected on the virtual target in the reference pose. Because the rigid body transformation parameters of the preset pose are defined with the reference pose as the initial position, selecting feature points on the virtual target in the reference pose simplifies the subsequent calculation steps and improves the calculation speed of the image registration model.
- At least one feature point can be randomly selected.
- the feature point can be located inside the virtual target or on the edge of the virtual target.
- the feature point can be understood as any point on the virtual target.
- the feature point can be determined by analyzing the position of the virtual target, or manually designated, or confirmed by the image registration model.
- the target area corresponding to the virtual target in the second projection image may be identified first to determine the position distribution of the virtual target in the second projection image. Then select at least one projection point on the inside or edge of the target area, and the selected projection point is the point on the virtual target.
- the second projection model parameter of the second projection image is used to project at least one projection point into the three-dimensional space to obtain at least one characteristic point on the virtual target.
- Based on the second projection model parameters of the second projection image, it is possible to obtain the point on the virtual target in three-dimensional space that corresponds to the projection point, and to use the obtained point as the feature point.
- the feature points can be easily found during subsequent registration training, thereby improving the training efficiency of the image registration model.
- Step S314 Use the projection model parameters corresponding to the virtual two-dimensional image and the rigid body transformation parameters to determine the second actual two-dimensional position of the feature point on the virtual two-dimensional image; and use the projection model parameters corresponding to the second projection image to determine the third actual two-dimensional position of the feature point on the second projection image.
- the third actual two-dimensional position of the feature point on the second projection image can be calculated according to the second projection model parameter corresponding to the second projection image.
- the third actual two-dimensional position can be calculated by using the three-dimensional coordinates of the feature points and the second projection model parameters.
- Since the preset pose has a rigid body transformation relative to the reference pose, the corresponding rigid body transformation parameters are also required to obtain the second actual two-dimensional position of the feature point on the virtual two-dimensional image.
- the second actual two-dimensional position can be calculated by using the position of the feature point in the reference pose, the rigid body transformation parameters of the preset pose relative to the reference pose, and the second projection model parameters.
- the position information of the feature points can be used as a basis for comparison, so as to improve the training effect of the image registration model.
- Step S32 Input each group of virtual two-dimensional image, the second projection image and the third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image.
- the image registration model can be used to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image. It is understandable that since the second predicted two-dimensional position is obtained by prediction (that is, calculated by using a neural network) by the image registration model, the predicted result may be inaccurate. In the subsequent training process, the image registration model can be adjusted for the relevant network parameters for the second predicted two-dimensional position.
- FIG. 6 is a schematic diagram of the fourth process of the training method of the image registration model according to the embodiment of the present disclosure.
- Step S32, namely "input each group of virtual two-dimensional image, second projection image, and third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image", can be implemented through the following steps:
- Step S321 Use the projection image feature extraction network of the image registration model to perform feature extraction on the second projection image to obtain a second projection feature map.
- the result output by the network is defined as the second projection feature map.
- each pixel corresponding to the second projection feature map will contain corresponding feature information.
- the feature information is, for example, a feature vector, such as a 128-dimensional feature vector.
- Step S322 Perform feature extraction on the virtual two-dimensional image by using the virtual image feature extraction network to obtain a second virtual feature map.
- the image output by the virtual image feature extraction network is defined as the second virtual feature map.
- each pixel corresponding to the second virtual feature map will also contain corresponding feature information.
- the feature information is, for example, a feature vector, such as a 128-dimensional feature vector.
- the virtual two-dimensional image and the second projection image, and the second projection feature map and the second virtual feature map have the same size. In this way, the positions of the feature points in the second projected feature map and the second virtual feature map can be directly determined by the pixel positions of the feature points in the virtual two-dimensional image and the second projected image.
- the second projected feature map and the second virtual feature map are obtained through the projected image feature extraction network and the virtual image feature extraction network respectively.
- After the two feature extraction networks are trained, the feature extraction of each image can be more accurate.
- The execution order of step S321 and step S322 is not limited, and can be adjusted according to actual needs.
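- The projection image feature extraction network and the virtual image feature extraction network of steps S321 and S322 are not restricted to a particular architecture; as one hedged sketch, a small fully convolutional network that keeps the spatial size and outputs a 128-dimensional feature vector per pixel could look like this (all names are illustrative).

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy fully convolutional extractor: the output feature map has the same
    height and width as the input image and 128 channels per pixel."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)   # (B, 128, H, W)

# projection_net = FeatureExtractor()   # produces the second projection feature map
# virtual_net = FeatureExtractor()      # produces the second virtual feature map
```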
- Step S323 Use the position prediction network of the image registration model to determine the second projection feature position corresponding to the third actual two-dimensional position on the second projection feature map, find the virtual feature position corresponding to the second projection feature position in the second virtual feature map, and use the virtual feature position to obtain the second predicted two-dimensional position.
- the position prediction network can use the third actual two-dimensional position to determine the corresponding second projection feature position on the second projection feature map. Furthermore, the position prediction network finds the virtual feature position corresponding to the second projection feature position in the second virtual feature map, and obtains the second predicted two-dimensional position on the virtual two-dimensional image from the virtual feature position. The second predicted two-dimensional position is the predicted position of the feature point on the virtual two-dimensional image.
- the "find out the virtual feature position corresponding to the second projection feature position on the second projection feature map in the second virtual feature map" described in this step can be implemented through the following steps:
- Step S3231 Find the first feature information located at the projection feature position in the projection feature map.
- the projection feature map in this step is the second projection feature map.
- the projection feature position in this step is the second projection feature position.
- the second projection feature position of the feature point on the map can be determined first on the second projection feature map, that is, the projection feature position of this step, and then the corresponding feature information can be obtained according to the projection feature position.
- the characteristic information of the pixel at the projected characteristic position is the first characteristic information.
- the first feature information may be an n-dimensional feature vector.
- Step S3232 In the virtual feature map, search for second feature information whose similarity with the first feature information satisfies a preset similarity condition.
- the virtual feature map in this step is the second virtual feature map.
- the position prediction network can search the second virtual feature map, based on the first feature information, for the second feature information whose similarity with the first feature information meets the preset similarity condition.
- the position prediction network can search for the second feature information that meets the preset similarity condition in the second virtual feature map according to the feature vector, and the second feature information is also a feature vector.
- the preset similarity condition can be set manually, for example, a similarity of 90% to 95% means that the search result is acceptable.
- the preset similarity condition can also be set according to the application scenario, and there is no limitation here.
- the preset similarity condition may also be that the second feature information with the highest similarity is selected.
- Step S3233 Obtain the virtual feature position of the second feature information in the virtual feature map.
- each location in the feature map has corresponding feature information
- the corresponding virtual feature location in the second virtual feature map can be found based on the second feature information.
- Step S33 Adjust the network parameters of the image registration model based on the second actual two-dimensional position and the second predicted two-dimensional position.
- the second predicted two-dimensional position can be compared with the second actual two-dimensional position to determine whether the second predicted two-dimensional position predicted by the position prediction network meets the requirements, and the network parameters of the image registration model are then adjusted accordingly.
- If the difference between the two meets the requirements, for example when the comparison loss between the two meets the requirements, the prediction result of the position prediction network can be considered acceptable.
- the feature information extracted by the virtual image feature extraction network and the projection image feature extraction network directly affects how the position prediction network uses the feature information to search for the second feature information and its corresponding position. Therefore, during training it is necessary to adjust the network parameters of the virtual image feature extraction network, the projection image feature extraction network and the position prediction network based on the results of the comparison.
- the three networks can cooperate with each other, and finally the second predicted two-dimensional position can meet the requirements compared with the second actual two-dimensional position.
- it is also possible to adjust the network parameters of only some of the three networks, for example, only the parameters of the projection image feature extraction network and the position prediction network.
- Furthermore, the actual three-dimensional position obtained from the second actual two-dimensional position and the predicted three-dimensional position obtained from the second predicted two-dimensional position can be compared, and the network parameters of the image registration model are adjusted according to the difference between the two.
- the second predicted two-dimensional position of multiple virtual two-dimensional images corresponding to the same preset pose may be used to determine the predicted three-dimensional position of the feature point.
- the corresponding predicted three-dimensional position can be obtained.
- the virtual two-dimensional image is obtained when the virtual target is in the preset pose, so the predicted three-dimensional position obtained from the second predicted two-dimensional position is the predicted three-dimensional position of the feature point with the virtual target corresponding to the virtual two-dimensional image in the preset pose, and it can be obtained from the second predicted two-dimensional positions corresponding to multiple projection model parameters. If the projection model parameters change, the corresponding predicted three-dimensional position changes accordingly.
- the difference between the predicted three-dimensional position of the feature point and the actual three-dimensional position can be used to adjust the network parameters of the image registration model. Since the projection model parameters used in the generation of the second projection image and the generation of the virtual two-dimensional image are both the second projection model parameters, and the second projection image is obtained when the virtual target is in the reference pose. Therefore, the actual three-dimensional position of the feature point under the reference pose can be obtained according to the second actual two-dimensional position and the second projection model parameters. After the actual three-dimensional position is obtained, it can be compared with the predicted three-dimensional position, and then the network parameters of the image registration model can be adjusted according to the difference between the two, such as the loss value. Therefore, by using the difference between the predicted three-dimensional position and the actual three-dimensional position to adjust the network parameters of the image registration model, the training effect can be further improved.
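- One way to obtain the predicted three-dimensional position from the second predicted two-dimensional positions of several virtual two-dimensional images is linear (DLT) triangulation, sketched below under the assumption that each view has a 3x4 projection matrix; the disclosure does not fix a particular algorithm.

```python
import numpy as np

def triangulate(points_2d, projection_matrices):
    """Estimate one 3D point from its 2D positions (u, v) in several views,
    each described by a 3x4 projection matrix, via the direct linear transform."""
    rows = []
    for (u, v), P in zip(points_2d, projection_matrices):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                 # (2 * n_views, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                # predicted three-dimensional position
```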
- the image registration model includes three neural networks of virtual image feature extraction network, projection image feature extraction network and position prediction network.
- When the network parameters of the image registration model are adjusted, the network parameters of these three networks are adjusted.
- the above steps S32 to S33 may be re-executed, or the method described in the embodiment of the present disclosure may be re-executed, so as to continuously perform the search for the second predicted two-dimensional position, the calculation of the loss value of the image registration model, and the adjustment of its network parameters until the requirements are met. Meeting the requirements may mean that the loss value is less than a preset loss threshold, or that the loss value is no longer reduced.
- the training cost can be reduced.
- virtual images can be generated in large batches, so a large amount of training data can be provided, and thus the effect of training can be improved.
- the training effect can be improved, so that the image registration model after real image training can better register real images.
- the above-mentioned reference two-dimensional image and real two-dimensional image may be X-ray images
- the first projection image is a digitally reconstructed radiographic image
- the second projection image may also be a digitally reconstructed radiographic image.
- the aforementioned image registration model can be used for image registration in the medical field.
- the first projection image is a digitally reconstructed radiographic image, so that the trained image registration model can be used to register the X-ray image and the digitally reconstructed radiographic image. It improves the training effect of the image registration model on this type of image registration.
- the aforementioned real image feature extraction network can be directly obtained from the aforementioned virtual image feature extraction network, that is, the virtual image feature extraction network is used as a real image feature extraction network.
- the image registration model includes a virtual image feature extraction network (real image feature extraction network), a projection image feature extraction network, and a position prediction network.
- By directly using the virtual image feature extraction network as the real image feature extraction network, the number of neural networks in the image registration model is reduced, the training process of the image registration model is simplified, and the training is easier to carry out, so that the image registration model can be more easily applied to the real environment.
- FIG. 7A is a schematic flowchart of an embodiment of an image registration method according to an embodiment of the present disclosure.
- the image registration model trained by the training method described in the foregoing embodiment can be used for registration.
- the image registration method may include the following steps:
- Step S71a Obtain a two-dimensional image and a three-dimensional image obtained by imaging the real target respectively.
- the real target may be imaged first to obtain a two-dimensional image and a three-dimensional image.
- the real target can be various objects in the real environment, such as cups, bones in the human body, and so on. Imaging the target means using various imaging methods, such as cameras, X-ray machines, 3D scanners, etc., to image the real target to obtain two-dimensional and three-dimensional images of the real target.
- the two-dimensional image is, for example, a two-dimensional picture obtained after imaging by a camera, or an X-ray image obtained after imaging by an X-ray machine.
- the three-dimensional image is, for example, a three-dimensional image scanned by a 3D scanner, or a three-dimensional image obtained by CT.
- Step S72a Project the three-dimensional image by using the projection model parameters of the two-dimensional image to obtain a projected image.
- the parameters of the projection model when imaging a real target to obtain a two-dimensional image can also be obtained at the same time.
- the three-dimensional image obtained from the real target can be projected according to the projection model parameters to obtain the projected image.
- the projection method can use a computer to simulate projection according to the projection model parameters.
- the size of the projected image and the two-dimensional image may be the same, for example, both have 256*256 pixels.
- Step S73a Use the image registration model to process the two-dimensional image and the projected image to obtain the two-dimensional position of the feature point on the real target on the two-dimensional image.
- the method of determining the feature points on the real target can be used to assist the registration.
- the feature points can be selected on the projection image, and the feature points can be selected to be located inside or on the edge of the above-mentioned target area on the projection image, so as to facilitate the subsequent search for the feature points and improve the registration efficiency.
- the actual two-dimensional position of the feature points on the projected image can be determined.
- feature points can also be selected on the three-dimensional image, so that the position of the feature points on the three-dimensional image can be determined, and then based on the projection model parameters when imaging the real target to obtain the two-dimensional image , Get the actual two-dimensional position of the feature point on the projected image.
- a three-dimensional coordinate system can be established in the virtual environment where the three-dimensional image is located, so that the three-dimensional coordinates of the feature points can be determined.
- the two-dimensional coordinates of the feature point on the projected image can be calculated, and the two-dimensional coordinates are the actual two-dimensional position of the feature point on the projected image.
- the positions of feature points in these two images can also be represented by the positions of pixel points corresponding to the feature points. For example, if the location of the pixel point corresponding to the feature point is (2, 2), the location of the feature point in the two-dimensional image and the projected image is also (2, 2).
- The process in which the image registration model processes the two-dimensional image and the projected image to obtain the two-dimensional position of the feature point on the real target on the two-dimensional image can include the following steps:
- Step S731a The image registration model performs feature extraction on the two-dimensional image and the projected image respectively to obtain a two-dimensional image feature map and a projected image feature map, and determine the projected feature position of the actual two-dimensional position on the projected image feature map.
- the image registration model includes a real image feature extraction network and a projection image feature extraction network. Therefore, the real image feature extraction network can be used to perform feature extraction on the two-dimensional image to obtain a two-dimensional image feature map; the projection image feature extraction network can be used to perform feature extraction on the projected image to obtain the projected image feature map.
- the pixels in the two feature maps can both contain feature information, and the feature information is, for example, a feature vector.
- the two-dimensional image feature map is obtained by feature extraction of the two-dimensional image by the real image feature extraction network
- the projected image feature map is obtained by feature extraction of the projected image by the projection image feature extraction network. Therefore, the position on the feature map has a corresponding relationship with the position on the two-dimensional image or the projection image.
- the corresponding relationship please refer to the related description of step S113, which will not be repeated here.
- the projected characteristic position of the characteristic point on the projected image can be determined according to the actual two-dimensional position of the characteristic point on the projected image.
- Step S732a Find the first feature information located at the projection feature position in the projection image feature map, and search for the second feature information whose similarity with the first feature information meets the preset requirements in the two-dimensional image feature map.
- For this step, reference may be made to step S271 and step S272.
- the second projection feature map in step S271 and step S272 is replaced with the projection image feature map under this step
- the second projection feature location is replaced with the projection feature location
- the second virtual feature map is replaced with a two-dimensional image feature map.
- Step S733a Obtain the predicted feature position of the second feature information in the two-dimensional image feature map, and use the predicted feature position to obtain the two-dimensional position.
- For this step, reference may be made to step S1133.
- the virtual feature position of step S1133 is replaced with the predicted feature position of this step, and the second predicted two-dimensional position is replaced with a two-dimensional position.
- Step S74a Use the two-dimensional position to obtain a registration result between the two-dimensional image and the three-dimensional image.
- Step S741a Project the two-dimensional position to the three-dimensional space by using the projection model parameters to obtain the first three-dimensional position of the feature point.
- Using the projection model parameters, the three-dimensional position of the feature point on the target when the two-dimensional image was taken is obtained through calculation.
- the calculation method belongs to the general method in the field, and will not be repeated here.
- Step S742a Obtain the second three-dimensional position of the feature point on the real target on the three-dimensional image.
- In step S63, the actual two-dimensional position of the feature point on the projected image has already been determined when the feature point is selected, that is, the actual two-dimensional position of the feature point on the projected image is known. Based on the actual two-dimensional position, the projection model parameters used when imaging the real target to obtain the two-dimensional image can be used to obtain the actual three-dimensional position of the feature point on the three-dimensional image.
- Step S743a Use the first three-dimensional position and the second three-dimensional position to obtain the rigid body transformation parameters of the three-dimensional image relative to the two-dimensional image.
- the two-dimensional image and the three-dimensional image obtained by imaging the target can be registered, so that the points on the two-dimensional image can correspond to the points of the three-dimensional image.
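- Given the first and second three-dimensional positions of several feature points, the rigid body transformation parameters can be estimated in closed form, for example with the Kabsch/Procrustes solution sketched below (function name and conventions are illustrative, not from the disclosure).

```python
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that dst ≈ src @ R.T + t,
    where src and dst are matched (N, 3) point sets, e.g. the feature points on
    the three-dimensional image and the positions recovered from the two-dimensional image."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = dst_c - R @ src_c
    return R, t
```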
- Two-dimensional-three-dimensional rigid image registration can assist biomechanical analysis, surgical navigation and the like. Its purpose is to determine the position and posture of the target area in the three-dimensional image (such as a CT image) so that it is aligned with its imaging on one or more two-dimensional images (such as X-ray images).
- Current registration methods based on iterative optimization take a long time to run and cannot meet real-time requirements; registration methods based on deep learning run fast, but existing methods cannot handle the situation where the number of two-dimensional images and the shooting angle of view are not fixed, and they require a large amount of training data, otherwise the method fails.
- Since manual image registration is time-consuming and inaccurate, it is difficult to obtain a large number of registered two-dimensional and three-dimensional images in a real application environment, which affects the further application of real-time registration methods. That is to say, the related technology has the following problems: the registration method based on optimization is slow; the registration model is trained for a specific perspective and cannot handle arbitrary perspectives; and the method fails when the training data is small.
- the embodiment of the present disclosure uses two neural networks to extract features of a real two-dimensional image and a registration-assisted two-dimensional image (such as a DRR image), respectively, to solve the situation that the number of two-dimensional images and the shooting angle of view are not fixed.
- a virtual two-dimensional image that is close to the real two-dimensional image is used to train the registration network. Since virtual two-dimensional images can be generated without limit and their registration annotations are accurate, this step can obtain a registration model with better results. Then, according to the corresponding relationship between the real two-dimensional image and the virtual two-dimensional image, the registration model is trained so that it can be used for real two-dimensional images.
- the training method of the registration network provided by the embodiment of the present disclosure includes two stages: in the first stage, the registration network is trained with virtual two-dimensional images, and in the second stage, the trained network is migrated by training with real two-dimensional images.
- Fig. 7B is a logic flow chart of an embodiment of an image registration method according to an embodiment of the present disclosure. As shown in Fig. 7B, the first stage is implemented by the following steps S71b to S73b, and the second stage is implemented by the following steps S74b and S75b.
- Step S71b Generate a virtual two-dimensional image by simulating the different positions and postures of the target in the space in the three-dimensional image.
- a method such as ray tracing is used to generate a large number of virtual two-dimensional images similar to real two-dimensional images by simulating the different positions and postures of the target in space in the three-dimensional image.
- the rigid body transformation parameters and projection model parameters of the three-dimensional image in the three-dimensional space are recorded.
- Step S72b Generate a DRR image according to the initial position of the three-dimensional image in space and the projection model parameters.
- the generated DRR image is used to assist registration.
- multiple feature points inside or on the edge of the target to be registered in the three-dimensional image are selected and their positions in the three-dimensional image are recorded, so that the feature points are imaged on the DRR image after being projected.
- According to the projection model and the position and posture of the three-dimensional image in space, the positions of the feature points on the DRR image and on the virtual two-dimensional image can be obtained.
- Step S73b By performing feature extraction on the virtual two-dimensional image and the DRR image respectively, the projection coordinates on the virtual two-dimensional image of the feature points in the DRR image are determined.
- the feature map of the virtual two-dimensional image and the feature map of the DRR image are extracted.
- multiple DRR images 71c are input to the DRR image feature extraction network 72c, and the top layer of the network outputs a feature map with the same size as the DRR and the feature dimension consistent with that of the virtual X-ray;
- the virtual X-ray image 74c is input to the virtual X-ray feature extraction network 75c, and the top layer of the network outputs a feature map with the same size as the virtual image and containing multi-dimensional features; according to the location of the feature point on the DRR image, the feature vector at the corresponding location in the DRR feature map is extracted and compared with the feature vectors of the virtual X-ray feature map, so as to obtain the feature point projection coordinate 73c of the feature point on the virtual X-ray image.
- the positions of the feature points on the virtual X-ray images of multiple viewing angles are obtained according to the above steps, and back propagation is used to train the virtual X-ray feature extraction network and the DRR image feature extraction network.
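- As an illustrative, hedged sketch of this first training stage, the matching of a DRR feature vector against the virtual X-ray feature map can be made differentiable with a soft-argmax, so that the position error back-propagates into both feature extraction networks; the data loader, network instances and variable names below are assumptions.

```python
import torch
import torch.nn.functional as F

def predicted_position(drr_feats: torch.Tensor, xray_feats: torch.Tensor, drr_point):
    """drr_feats, xray_feats: (C, H, W) feature maps; drr_point: (row, col) of the
    feature point on the DRR feature map. Returns the soft-argmax (row, col) of
    the similarity map over the virtual X-ray feature map."""
    c, h, w = xray_feats.shape
    query = drr_feats[:, drr_point[0], drr_point[1]]                    # (C,)
    similarity = (xray_feats * query[:, None, None]).sum(0).flatten()   # (H*W,)
    weights = F.softmax(similarity, dim=0)
    rows = torch.arange(h, dtype=torch.float32).repeat_interleave(w)
    cols = torch.arange(w, dtype=torch.float32).repeat(h)
    return torch.stack([(weights * rows).sum(), (weights * cols).sum()])

# for drr_img, xray_img, drr_pt, true_xy in loader:          # hypothetical data loader
#     pred_xy = predicted_position(drr_net(drr_img)[0], xray_net(xray_img)[0], drr_pt)
#     loss = F.mse_loss(pred_xy, true_xy)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```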
- Step S74b Determine the registration result of the virtual two-dimensional image and the DRR image according to the projection model parameters and the projection coordinates of the feature points on the virtual two-dimensional image.
- the three-dimensional coordinates of the feature points are obtained from the projection coordinates of the feature points on the virtual two-dimensional image.
- the rigid body transformation parameters from the initial position to the real position are calculated, that is, the registration result of the virtual two-dimensional image and the DRR image.
- Step S75b Train a real two-dimensional image feature extraction network according to the registration result.
- the real X-ray image 72d is input into the real X-ray image feature extraction network 75d; the corresponding virtual X-ray image 73d is input into the virtual X-ray image feature extraction network 77d formed in step S73b; the difference between the middle-layer outputs of the two networks is calculated, and back propagation is used to train the real X-ray image feature extraction network 75d.
- The feature error 78d and the registration error 76d are used to train the real X-ray image feature extraction network 75d, and the DRR image 71d is used to train the DRR image feature extraction network 74d.
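- A possible combination of the feature error 78d and the registration error 76d is a simple weighted sum, sketched below; the weights, tensor shapes and names are assumptions for illustration only.

```python
import torch.nn.functional as F

def stage2_loss(real_feats, virtual_feats, predicted_registration, actual_registration,
                feature_weight: float = 1.0, registration_weight: float = 1.0):
    """Weighted sum of the feature error (difference between intermediate feature
    maps of the real and virtual X-ray networks) and the registration error."""
    feature_error = F.mse_loss(real_feats, virtual_feats)
    registration_error = F.mse_loss(predicted_registration, actual_registration)
    return feature_weight * feature_error + registration_weight * registration_error
```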
- the embodiments of the present disclosure implement a two-dimensional-three-dimensional image registration method, which can realize rapid registration of a three-dimensional image and several two-dimensional images with non-fixed viewing angles.
- In addition, the registration speed is fast.
- the embodiment of the present disclosure proposes a training method for a registration network, which can improve the registration accuracy of the registration network when the training data is small, can be applied when the amount of training data is small, and alleviates the problem that registration methods based on deep learning are difficult to apply to small data sets.
- the network structure provided in the embodiments of the present disclosure processes the real image and the registration auxiliary image separately, a single network handles all viewing angles, and the method is applicable to two-dimensional images shot from any angle.
- the embodiments of the present disclosure can be applied to surgical navigation.
- CT images of the subject's knee joint are taken before the operation, and X-ray images are taken in real time during the operation; the position and posture of the subject's bones are obtained non-invasively and reconstructed, and the result is integrated into the surgical navigation system to achieve augmented reality display.
- the training device 80 includes: a first acquisition module 81, a first feature extraction module 82, a second feature extraction module 83, and a first adjustment module 84.
- the first acquisition module 81 is configured to acquire a real two-dimensional image and a reference two-dimensional image, where the real two-dimensional image is obtained by imaging a real target using an imaging device, and the position of the real target in the reference two-dimensional image matches the real two-dimensional image.
- the first feature extraction module 82 is configured to use the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map; wherein the image registration model has been pre-trained using the virtual image, And the virtual image feature extraction network participates in pre-training, and the virtual image is generated based on the virtual target.
- the second feature extraction module 83 is configured to use the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain the first real feature map; wherein the real image feature extraction network does not participate in pre-training.
- the first adjustment module 84 is configured to use the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
- the first acquisition module 81 is configured to perform acquisition of the reference two-dimensional image, including: using the actual registration result between the real two-dimensional image and the real three-dimensional image to generate a reference two-dimensional image whose position of the real target is consistent with the real two-dimensional image.
- the training device 80 also includes a third feature extraction and prediction module and a second adjustment module.
- the third feature extraction and prediction module is configured to: use the adjusted real image feature extraction network to perform feature extraction on the real two-dimensional image to obtain the second real feature map; use the first projection model parameters of the real two-dimensional image to project the real three-dimensional image to obtain the first projection image and obtain the first actual two-dimensional position of the feature point on the real target in the first projection image; use the projection image feature extraction network to perform feature extraction on the first projection image to obtain the first projection feature map; use the position prediction network to determine the first projection feature position corresponding to the first actual two-dimensional position on the first projection feature map, find the real feature position corresponding to the first projection feature position in the second real feature map, and use the real feature position to obtain the first predicted two-dimensional position of the feature point on the real target in the real two-dimensional image;
- the second adjustment module is configured to use the difference between the actual registration result and the predicted registration result to adjust the network parameters of the real image feature extraction network, including: using the difference between the second real feature map and the first virtual feature map, and the difference between the actual registration result and the predicted registration result, to adjust the network parameters of the real image feature extraction network.
- the training device 80 also includes a pre-training module.
- the pre-training module is configured to perform the following steps to pre-train the image registration model: acquire at least one group of a virtual two-dimensional image and a second projection image, and acquire the second actual two-dimensional position of the feature points on the virtual target in the virtual two-dimensional image and their third actual two-dimensional position in the second projection image, where the virtual two-dimensional image is obtained by simulated imaging of the virtual target and the second projection image is obtained by simulated projection of the virtual target; input each group of virtual two-dimensional image, second projection image, and third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image; and adjust the network parameters of the image registration model based on the second actual two-dimensional position and the second predicted two-dimensional position.
- the pre-training module is configured to input each group of virtual two-dimensional image, second projection image, and third actual two-dimensional position into the image registration model to obtain the second predicted two-dimensional position of the feature point on the virtual target in the virtual two-dimensional image, including: using the projection image feature extraction network of the image registration model to perform feature extraction on the second projection image to obtain the second projection feature map; using the virtual image feature extraction network to perform feature extraction on the virtual two-dimensional image to obtain the second virtual feature map; and using the position prediction network of the image registration model to determine the second projection feature position corresponding to the third actual two-dimensional position on the second projection feature map, find the virtual feature position corresponding to the second projection feature position in the second virtual feature map, and use the virtual feature position to obtain the second predicted two-dimensional position.
- the pre-training module is configured to adjust the network parameters of the image registration model based on the second actual two-dimensional position and the second predicted two-dimensional position, including: adjusting the network parameters of the virtual image feature extraction network, the projection image feature extraction network, and the position prediction network based on the second actual two-dimensional position and the second predicted two-dimensional position.
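- One way such a joint adjustment could be realised is sketched below, assuming the position prediction is made differentiable (for example via a soft-argmax) so that the loss on the two-dimensional positions reaches all three networks; the smooth-L1 loss and the optimiser settings are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: a single optimiser over the three networks that take part
# in pre-training (names are placeholders for the patent's networks), e.g.
#   optimizer = torch.optim.Adam(
#       list(virtual_net.parameters()) + list(projection_net.parameters())
#       + list(position_net.parameters()), lr=1e-4)

def pretrain_step(optimizer: torch.optim.Optimizer,
                  pred_xy: torch.Tensor,
                  actual_xy: torch.Tensor) -> float:
    """One pre-training update: loss between the second predicted and the second
    actual two-dimensional position, back-propagated into every network the
    optimiser covers (assumes the position prediction is differentiable)."""
    loss = F.smooth_l1_loss(pred_xy, actual_xy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.item())
```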
- the third feature extraction and prediction module is configured to search the second real feature map for the real feature position corresponding to the first projection feature position on the first projection feature map, including: obtaining the first feature information at the projection feature position in the projection feature map; searching the virtual feature map or the real feature map for the second feature information whose similarity with the first feature information satisfies the preset similarity condition; and obtaining the virtual feature position of the second feature information in the virtual feature map or its real feature position in the real feature map.
- the pre-training module is configured to search the second virtual feature map for the virtual feature position corresponding to the second projection feature position on the second projection feature map, including: obtaining the first feature information at the projection feature position in the projection feature map; searching the virtual feature map or the real feature map for the second feature information whose similarity with the first feature information satisfies the preset similarity condition; and obtaining the virtual feature position of the second feature information in the virtual feature map or its real feature position in the real feature map.
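- A minimal sketch of this similarity search, assuming cosine similarity as the measure and a fixed threshold as the "preset similarity condition"; both choices, and the function name, are assumptions for illustration. The same structure would apply whether the search is run against the virtual feature map (pre-training) or the real feature map (the third feature extraction and prediction module above).

```python
import torch
import torch.nn.functional as F

def find_matching_position(proj_feat: torch.Tensor,   # (C, H, W) projection feature map
                           other_feat: torch.Tensor,  # (C, H, W) virtual or real feature map
                           proj_xy: tuple,            # projection feature position (x, y)
                           min_sim: float = 0.5):
    """Look up the first feature information at the projection feature position,
    then search the virtual / real feature map for the location whose feature is
    most similar (cosine similarity used as an assumed similarity condition)."""
    x, y = proj_xy
    query = proj_feat[:, y, x]                              # first feature information
    sim = F.cosine_similarity(other_feat,
                              query[:, None, None].expand_as(other_feat),
                              dim=0)                        # (H, W) similarity map
    best = torch.argmax(sim)
    h, w = sim.shape
    if sim.flatten()[best] < min_sim:
        return None                                         # condition not satisfied
    return int(best % w), int(best // w)                    # matched (x, y) position
```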
- each group of the virtual two-dimensional image and the second projection image includes a virtual two-dimensional image obtained by simulated imaging of the virtual target in a preset pose using the same second projection model parameters, and a second projection image obtained by simulated projection of the virtual target in the reference pose using the same second projection model parameters; wherein the second projection model parameters and/or the preset poses corresponding to different groups of the virtual two-dimensional image and the second projection image are different.
- the pre-training module is configured to perform the following steps to pre-train the image registration model: use the second predicted two-dimensional positions in multiple virtual two-dimensional images corresponding to the same preset pose to determine the predicted three-dimensional position of the feature points; and use the difference between the predicted three-dimensional position and the actual three-dimensional position of the feature points to adjust the network parameters of the image registration model.
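- One standard way to recover a predicted three-dimensional position from several second predicted two-dimensional positions is direct linear transform (DLT) triangulation, sketched below under the assumption that each view's second projection model parameters are available as a 3x4 projection matrix; the patent does not prescribe this particular formulation.

```python
import numpy as np

def triangulate(points_2d: list, proj_mats: list) -> np.ndarray:
    """DLT triangulation: recover a 3D point from its 2D positions in several
    views, given each view's 3x4 projection matrix (assumed representation of
    the second projection model parameters)."""
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                     # (2N, 4) linear system
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]                             # homogeneous solution
    return X[:3] / X[3]                    # predicted 3D position
```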
- the pre-training module is configured to acquire each group of the virtual two-dimensional image and the second projection image, including: simulating imaging of the virtual target in a preset pose using the same second projection model parameters to obtain the virtual two-dimensional image, and recording the second projection model parameters and the rigid body transformation parameters of the virtual target in the preset pose relative to the reference pose; and simulating projection of the virtual target in the reference pose using the same second projection model parameters to obtain the second projection image.
- the pre-training module is configured to acquire the second actual two-dimensional position of the feature points on the virtual target in the virtual two-dimensional image and their third actual two-dimensional position in the second projection image, including: determining at least one feature point on the virtual target in the reference pose; using the second projection model parameters and the rigid body transformation parameters corresponding to the virtual two-dimensional image to determine the second actual two-dimensional position of the feature point in the virtual two-dimensional image; and using the second projection model parameters corresponding to the second projection image to determine the third actual two-dimensional position of the feature point in the second projection image.
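- A minimal sketch of computing these actual two-dimensional positions, assuming the second projection model parameters can be expressed as a pinhole camera (intrinsic matrix plus extrinsic rotation and translation) and the rigid body transformation as a rotation and translation; these parameterisations are assumptions for illustration.

```python
import numpy as np

def project_feature_point(p_ref: np.ndarray,      # 3D feature point in the reference pose
                          pose_r: np.ndarray,     # rigid body rotation, preset vs. reference pose
                          pose_t: np.ndarray,     # rigid body translation
                          K: np.ndarray,          # 3x3 intrinsics (assumed parameterisation)
                          cam_r: np.ndarray,      # camera extrinsic rotation
                          cam_t: np.ndarray) -> np.ndarray:
    """Second actual 2D position: move the feature point into the preset pose
    with the rigid body transformation, then apply the projection model.
    For the third actual 2D position, use an identity pose_r and zero pose_t
    (the second projection image is taken in the reference pose)."""
    p_pose = pose_r @ p_ref + pose_t        # rigid body transformation
    p_cam = cam_r @ p_pose + cam_t          # into the camera frame
    uvw = K @ p_cam                         # pinhole projection
    return uvw[:2] / uvw[2]                 # actual 2D position (u, v)
```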
- the pre-training module is configured to determine at least one feature point on the virtual target in the reference pose, including: randomly selecting at least one feature point on the virtual target in the reference pose; or identifying the target area corresponding to the virtual target in the second projection image, selecting at least one projection point inside or on the edge of the target area, and projecting the at least one projection point into three-dimensional space using the second projection model parameters of the second projection image to obtain at least one feature point on the virtual target.
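- A hedged sketch of projecting a selected projection point back into three-dimensional space under the same pinhole assumption: a single two-dimensional point only fixes a viewing ray, so a depth along the ray (for example taken from the virtual target's surface) must be supplied; that depth argument is an illustrative assumption.

```python
import numpy as np

def backproject_point(uv: np.ndarray,        # 2D projection point (u, v)
                      depth: float,          # assumed known depth along the viewing ray
                      K: np.ndarray,         # 3x3 intrinsics
                      cam_r: np.ndarray,     # camera extrinsic rotation
                      cam_t: np.ndarray) -> np.ndarray:
    """Back-project a projection point selected inside or on the edge of the
    target area into 3D space; the point on the virtual target is taken at the
    supplied depth (illustrative assumption)."""
    pix = np.array([uv[0], uv[1], 1.0])
    p_cam = depth * (np.linalg.inv(K) @ pix)     # point in the camera frame
    return cam_r.T @ (p_cam - cam_t)             # back into the reference-pose frame
```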
- the image registration device 90 includes: a second acquisition module 91, a projection module 92, a prediction module 93, and a registration module 94.
- the second acquisition module 91 is configured to acquire a two-dimensional image and a three-dimensional image obtained by imaging the target respectively; the projection module 92 is configured to project the three-dimensional image using the projection model parameters of the two-dimensional image to obtain a projection image; the prediction module 93 is configured to process the two-dimensional image and the projection image using the image registration model to obtain the two-dimensional positions of the feature points on the target in the two-dimensional image; the registration module 94 is configured to use the two-dimensional positions to obtain the registration result between the two-dimensional image and the three-dimensional image; wherein the image registration model is trained by the above-mentioned training device for the image registration model.
- the aforementioned registration module 94 may also be configured to use projection model parameters to project the two-dimensional position into the three-dimensional space to obtain the first three-dimensional position of the feature point.
- the aforementioned registration module 94 may also be configured to obtain the second three-dimensional position of the feature points on the real target in the three-dimensional image.
- the aforementioned registration module 94 may also be configured to use the first three-dimensional position and the second three-dimensional position to output a registration result of the two-dimensional image relative to the three-dimensional image.
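- One common way to output a registration result from the first and second three-dimensional positions of corresponding feature points is a least-squares rigid fit (Kabsch / Procrustes); the sketch below is such a fit and is not necessarily the patent's exact method.

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Kabsch / Procrustes fit: rotation R and translation t minimising
    ||R @ src_i + t - dst_i||^2 over corresponding 3D feature points, e.g.
    first 3D positions (src) against second 3D positions (dst)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t                                  # registration result (rotation, translation)
```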
- FIG. 10 is a schematic block diagram of the structure of an image registration device according to an embodiment of the disclosure.
- the image registration device includes a processor 101 and a memory 102 coupled to the processor.
- the processor 101 is configured to execute a computer program stored in the memory 102 to execute the above-mentioned training method of an image registration model, or an image registration method.
- the storage device 110 stores a computer program, and when the computer program is executed by a processor, the steps of the training method of the image registration model or the image registration method in any of the foregoing embodiments can be implemented.
- the computer-readable storage medium may be a U disk, a mobile hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or another medium that can store a computer program; or it may be a server that stores the computer program, and the server can send the stored computer program to another device for execution, or run the stored computer program itself.
- the embodiments of the present disclosure also provide a computer program product; the computer program product stores program instructions, and the program instructions are loaded and executed by a processor to implement the steps of the above-mentioned training method of the image registration model or the image registration method.
- the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
- the functional units in the various embodiments of the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solutions of the embodiments of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage media include: a U disk, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or other media that can store program code.
- the embodiment of the present disclosure acquires a real two-dimensional image and a reference two-dimensional image, and uses the virtual image feature extraction network of the image registration model to perform feature extraction on the reference two-dimensional image to obtain a first virtual feature map, wherein the image registration model has been pre-trained with virtual images, the virtual image feature extraction network participates in the pre-training, and the virtual images are generated based on the virtual target; uses the real image feature extraction network of the image registration model to perform feature extraction on the real two-dimensional image to obtain a first real feature map, wherein the real image feature extraction network does not participate in the pre-training; and uses the difference between the first real feature map and the first virtual feature map to adjust the network parameters of the real image feature extraction network.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021577511A JP7241933B2 (ja) | 2020-05-29 | 2020-12-14 | 画像レジストレーション方法及びそれに関係するモデルトレーニング方法、デバイス、装置 |
KR1020217042598A KR102450931B1 (ko) | 2020-05-29 | 2020-12-14 | 이미지 정합 방법 및 연관된 모델 훈련 방법, 기기, 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010477508.6 | 2020-05-29 | ||
CN202010477508.6A CN111640145B (zh) | 2020-05-29 | 2020-05-29 | 图像配准方法及其相关的模型训练方法、设备、装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021238171A1 true WO2021238171A1 (zh) | 2021-12-02 |
Family
ID=72332237
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/136254 WO2021238171A1 (zh) | 2020-05-29 | 2020-12-14 | 图像配准方法及其相关的模型训练方法、设备、装置 |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP7241933B2 (ko) |
KR (1) | KR102450931B1 (ko) |
CN (1) | CN111640145B (ko) |
TW (1) | TWI785588B (ko) |
WO (1) | WO2021238171A1 (ko) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188816A (zh) * | 2022-12-29 | 2023-05-30 | 广东省新黄埔中医药联合创新研究院 | 一种基于循环一致性变形图像匹配网络的穴位定位方法 |
CN117132507A (zh) * | 2023-10-23 | 2023-11-28 | 光轮智能(北京)科技有限公司 | 图像增强方法、图像处理方法、计算机设备及存储介质 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111640145B (zh) * | 2020-05-29 | 2022-03-29 | 上海商汤智能科技有限公司 | 图像配准方法及其相关的模型训练方法、设备、装置 |
CN114187337B (zh) * | 2021-12-07 | 2022-08-23 | 推想医疗科技股份有限公司 | 图像配准方法、分割方法、装置、电子设备以及存储介质 |
CN114453981B (zh) * | 2022-04-12 | 2022-07-19 | 北京精雕科技集团有限公司 | 工件找正方法及装置 |
JP7376201B1 (ja) * | 2023-09-13 | 2023-11-08 | アキュイティー株式会社 | 情報処理システム、情報処理方法及びプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190096035A1 (en) * | 2017-09-28 | 2019-03-28 | Zhengmin Li | Super-resolution apparatus and method for virtual and mixed reality |
CN110197190A (zh) * | 2018-02-27 | 2019-09-03 | 北京猎户星空科技有限公司 | 模型训练和物体的定位方法及装置 |
CN110838139A (zh) * | 2019-11-04 | 2020-02-25 | 上海联影智能医疗科技有限公司 | 图像配准模型的训练方法、图像配准方法和计算机设备 |
CN111640145A (zh) * | 2020-05-29 | 2020-09-08 | 上海商汤智能科技有限公司 | 图像配准方法及其相关的模型训练方法、设备、装置 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9305235B1 (en) * | 2007-12-21 | 2016-04-05 | Cognex Corporation | System and method for identifying and locating instances of a shape under large variations in linear degrees of freedom and/or stroke widths |
KR102294734B1 (ko) * | 2014-09-30 | 2021-08-30 | 삼성전자주식회사 | 영상 정합 장치, 영상 정합 방법 및 영상 정합 장치가 마련된 초음파 진단 장치 |
CN107025650B (zh) * | 2017-04-20 | 2019-10-29 | 中北大学 | 一种基于多层p样条和稀疏编码的医学图像配准方法 |
CN112055870A (zh) * | 2018-03-02 | 2020-12-08 | 皇家飞利浦有限公司 | 图像配准合格评价 |
KR20190116606A (ko) * | 2018-04-04 | 2019-10-15 | (주)온넷시스템즈코리아 | 하이브리드 이미지 정합 장치 |
CA3100642A1 (en) * | 2018-05-21 | 2019-11-28 | Corista, LLC | Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning |
TWI709107B (zh) * | 2018-05-21 | 2020-11-01 | 國立清華大學 | 影像特徵提取方法及包含其顯著物體預測方法 |
CN109377520B (zh) * | 2018-08-27 | 2021-05-04 | 西安电子科技大学 | 基于半监督循环gan的心脏图像配准系统及方法 |
CN111127538B (zh) * | 2019-12-17 | 2022-06-07 | 武汉大学 | 一种基于卷积循环编码-解码结构的多视影像三维重建方法 |
- 2020
- 2020-05-29 CN CN202010477508.6A patent/CN111640145B/zh active Active
- 2020-12-14 KR KR1020217042598A patent/KR102450931B1/ko active IP Right Grant
- 2020-12-14 JP JP2021577511A patent/JP7241933B2/ja active Active
- 2020-12-14 WO PCT/CN2020/136254 patent/WO2021238171A1/zh active Application Filing
- 2021
- 2021-05-03 TW TW110115866A patent/TWI785588B/zh not_active IP Right Cessation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190096035A1 (en) * | 2017-09-28 | 2019-03-28 | Zhengmin Li | Super-resolution apparatus and method for virtual and mixed reality |
CN110197190A (zh) * | 2018-02-27 | 2019-09-03 | 北京猎户星空科技有限公司 | 模型训练和物体的定位方法及装置 |
CN110838139A (zh) * | 2019-11-04 | 2020-02-25 | 上海联影智能医疗科技有限公司 | 图像配准模型的训练方法、图像配准方法和计算机设备 |
CN111640145A (zh) * | 2020-05-29 | 2020-09-08 | 上海商汤智能科技有限公司 | 图像配准方法及其相关的模型训练方法、设备、装置 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188816A (zh) * | 2022-12-29 | 2023-05-30 | 广东省新黄埔中医药联合创新研究院 | 一种基于循环一致性变形图像匹配网络的穴位定位方法 |
CN116188816B (zh) * | 2022-12-29 | 2024-05-28 | 广东省新黄埔中医药联合创新研究院 | 一种基于循环一致性变形图像匹配网络的穴位定位方法 |
CN117132507A (zh) * | 2023-10-23 | 2023-11-28 | 光轮智能(北京)科技有限公司 | 图像增强方法、图像处理方法、计算机设备及存储介质 |
CN117132507B (zh) * | 2023-10-23 | 2023-12-22 | 光轮智能(北京)科技有限公司 | 图像增强方法、图像处理方法、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
TWI785588B (zh) | 2022-12-01 |
TW202145146A (zh) | 2021-12-01 |
KR102450931B1 (ko) | 2022-10-06 |
CN111640145B (zh) | 2022-03-29 |
JP2022534123A (ja) | 2022-07-27 |
KR20220006654A (ko) | 2022-01-17 |
CN111640145A (zh) | 2020-09-08 |
JP7241933B2 (ja) | 2023-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021238171A1 (zh) | 图像配准方法及其相关的模型训练方法、设备、装置 | |
CN112767538B (zh) | 三维重建及相关交互、测量方法和相关装置、设备 | |
CN106780619B (zh) | 一种基于Kinect深度相机的人体尺寸测量方法 | |
CN104781849B (zh) | 单眼视觉同时定位与建图(slam)的快速初始化 | |
CN113012282B (zh) | 三维人体重建方法、装置、设备及存储介质 | |
CN110544301A (zh) | 一种三维人体动作重建系统、方法和动作训练系统 | |
US20180174311A1 (en) | Method and system for simultaneous scene parsing and model fusion for endoscopic and laparoscopic navigation | |
KR100793838B1 (ko) | 카메라 모션 추출장치, 이를 이용한 해상장면의 증강현실 제공 시스템 및 방법 | |
JP7164045B2 (ja) | 骨格認識方法、骨格認識プログラムおよび骨格認識システム | |
Barandiaran et al. | Real-time optical markerless tracking for augmented reality applications | |
CN111080776B (zh) | 人体动作三维数据采集和复现的处理方法及系统 | |
CN111862299A (zh) | 人体三维模型构建方法、装置、机器人和存储介质 | |
CN109613974B (zh) | 一种大场景下的ar家居体验方法 | |
US11403781B2 (en) | Methods and systems for intra-capture camera calibration | |
US20200057778A1 (en) | Depth image pose search with a bootstrapped-created database | |
CN109584321A (zh) | 用于基于深度学习的图像重建的系统和方法 | |
CN113886510B (zh) | 一种终端交互方法、装置、设备及存储介质 | |
JP2012113438A (ja) | 姿勢推定装置および姿勢推定プログラム | |
Jørgensen et al. | Simulation-based Optimization of Camera Placement in the Context of Industrial Pose Estimation. | |
CN105931231A (zh) | 一种基于全连接随机场联合能量最小化的立体匹配方法 | |
Sosa et al. | 3D surface reconstruction of entomological specimens from uniform multi-view image datasets | |
Villa-Uriol et al. | Automatic creation of three-dimensional avatars | |
Garau et al. | Unsupervised continuous camera network pose estimation through human mesh recovery | |
CN117315018B (zh) | 基于改进PnP的用户面部位姿检测方法、设备、介质 | |
RU2771745C1 (ru) | Способ отслеживания (трекинга) в реальном времени анатомических ориентиров объекта |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021577511 Country of ref document: JP Kind code of ref document: A Ref document number: 20217042598 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20937950 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20937950 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2023) |
|